Dataset columns (name, dtype, observed range):

| column | dtype | observed range |
|---|---|---|
| title | string | lengths 2 to 153 |
| abstract | string | lengths 0 to 2.15k |
| review | string | lengths 97 to 8.43k |
| source | string | 4 classes |
| review_src | string | 1 class |
| thinking_trace | string | 1 class |
| conversations | list | length 2 |
| review_sentences | sequence | lengths 1 to 61 |
| criteria_count | dict | - |
| criticism | float64 | 0 to 1 |
| example | float64 | 0 to 1 |
| importance_and_relevance | float64 | 0 to 1 |
| materials_and_methods | float64 | 0 to 1 |
| praise | float64 | 0 to 1 |
| presentation_and_reporting | float64 | 0 to 1 |
| results_and_discussion | float64 | 0 to 1 |
| suggestion_and_solution | float64 | 0 to 1 |
| total | int64 | 1 to 61 |
| has_criticism | int64 | 0 to 1 |
| has_example | int64 | 0 to 1 |
| has_importance_and_relevance | int64 | 0 to 1 |
| has_materials_and_methods | int64 | 0 to 1 |
| has_praise | int64 | 0 to 1 |
| has_presentation_and_reporting | int64 | 0 to 1 |
| has_results_and_discussion | int64 | 0 to 1 |
| has_suggestion_and_solution | int64 | 0 to 1 |
| reward_value | float64 | 2.13 to 6 |
| reward_value_length_adjusted | float64 | -34.13 to 3.73 |
| length_penalty | float64 | 0 to 36.4 |
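Across the rows visible in this preview, the scalar columns look like simple functions of `criteria_count` and `length_penalty`: each per-category float equals that category's sentence count divided by `total`, each `has_*` flag marks a nonzero count, `reward_value` matches the sum of the per-category fractions, and `reward_value_length_adjusted` equals `reward_value` minus `length_penalty`. The sketch below only restates that observed pattern; the function name and the standalone script are hypothetical and are not part of the dataset or its original processing pipeline.

```python
# Hypothetical helper that reproduces the derived columns from criteria_count,
# as inferred from the rows shown here (not the dataset's documented pipeline).

CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def derived_columns(criteria_count: dict, length_penalty: float) -> dict:
    # "total" matches the number of entries in review_sentences for every row shown;
    # category counts can sum to more than total, so one sentence may carry several labels.
    total = criteria_count["total"]
    out = {"total": total, "length_penalty": length_penalty}
    for cat in CATEGORIES:
        out[cat] = criteria_count[cat] / total           # e.g. criticism 6 / 12 -> 0.5
        out["has_" + cat] = int(criteria_count[cat] > 0)  # indicator for a nonzero count
    out["reward_value"] = sum(out[cat] for cat in CATEGORIES)
    out["reward_value_length_adjusted"] = out["reward_value"] - length_penalty
    return out

if __name__ == "__main__":
    # Counts from the first preview row; reproduces 0.5, 0.25, 0.833333, ...
    # and reward_value of about 2.416667.
    first_row_counts = {
        "criticism": 6, "example": 0, "importance_and_relevance": 3,
        "materials_and_methods": 10, "praise": 4, "presentation_and_reporting": 0,
        "results_and_discussion": 3, "suggestion_and_solution": 3, "total": 12,
    }
    print(derived_columns(first_row_counts, length_penalty=0.015782))
```

Each row below is one review record, with its fields in the column order listed above.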
An empirical study on evaluation metrics of generative adversarial networks | Despite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect. | This paper introduces a comparison between several approaches for evaluating GANs. The authors consider the setting of a pre-trained image models as generic representations of generated and real images to be compared. They compare the evaluation methods based on five criteria termed disciminability, mode collapsing and mode dropping, sample efficiency,computation efficiency, and robustness to transformation. This paper has some interesting insights and a few ideas of how to validate an evaluation method. The topic is an important one and a very difficult one. However, the work has some problems in rigor and justification and the conclusions are overstated in my view. Pros -Several interesting ideas for evaluating evaluation metrics are proposed -The authors tackle a very challenging subject Cons -It is not clear why GANs are the only generative model considered -Unprecedented visual quality as compared to other generative models has brought the GAN to prominence and yet this is not really a big factor in this paper. -The evaluations rely on using a pre-trained imagenet model as a representation. The authors point out that different architectures yield similar results for their analysis, however it is not clear how the biases of the learned representations affect the results. The use of learned representations needs more rigorous justification -The evaluation for discriminative metric, increased score when mix of real and unreal increases, is interesting but it is not convincing as the sole evaluation for “discriminativeness” and seems like something that can be gamed. - The authors implicitly contradict the argument of Theis et al against monolithic evaluation metrics for generative models, but this is not strongly supported. Several references I suggest: https://arxiv.org/abs/1706.08500 (FID score) https://arxiv.org/abs/1511.04581 (MMD as evaluation) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn empirical study on evaluation metrics of generative adversarial networks\nDespite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a comparison between several approaches for evaluating GANs. The authors consider the setting of a pre-trained image models as generic representations of generated and real images to be compared. They compare the evaluation methods based on five criteria termed disciminability, mode collapsing and mode dropping, sample efficiency,computation efficiency, and robustness to transformation. This paper has some interesting insights and a few ideas of how to validate an evaluation method. The topic is an important one and a very difficult one. However, the work has some problems in rigor and justification and the conclusions are overstated in my view. Pros -Several interesting ideas for evaluating evaluation metrics are proposed -The authors tackle a very challenging subject Cons -It is not clear why GANs are the only generative model considered -Unprecedented visual quality as compared to other generative models has brought the GAN to prominence and yet this is not really a big factor in this paper. -The evaluations rely on using a pre-trained imagenet model as a representation. The authors point out that different architectures yield similar results for their analysis, however it is not clear how the biases of the learned representations affect the results. The use of learned representations needs more rigorous justification -The evaluation for discriminative metric, increased score when mix of real and unreal increases, is interesting but it is not convincing as the sole evaluation for “discriminativeness” and seems like something that can be gamed. - The authors implicitly contradict the argument of Theis et al against monolithic evaluation metrics for generative models, but this is not strongly supported. Several references I suggest: https://arxiv.org/abs/1706.08500 (FID score) https://arxiv.org/abs/1511.04581 (MMD as evaluation)",
"role": "assistant"
}
]
| [
" This paper introduces a comparison between several approaches for evaluating GANs.",
"The authors consider the setting of a pre-trained image models as generic representations of generated and real images to be compared.",
"They compare the evaluation methods based on five criteria termed disciminability, mode collapsing and mode dropping, sample efficiency,computation efficiency, and robustness to transformation.",
"This paper has some interesting insights and a few ideas of how to validate an evaluation method.",
"The topic is an important one and a very difficult one.",
"However, the work has some problems in rigor and justification and the conclusions are overstated in my view.",
"Pros -Several interesting ideas for evaluating evaluation metrics are proposed -The authors tackle a very challenging subject Cons -It is not clear why GANs are the only generative model considered -Unprecedented visual quality as compared to other generative models has brought the GAN to prominence and yet this is not really a big factor in this paper.",
"-The evaluations rely on using a pre-trained imagenet model as a representation.",
"The authors point out that different architectures yield similar results for their analysis, however it is not clear how the biases of the learned representations affect the results.",
"The use of learned representations needs more rigorous justification -The evaluation for discriminative metric, increased score when mix of real and unreal increases, is interesting but it is not convincing as the sole evaluation for “discriminativeness” and seems like something that can be gamed.",
"- The authors implicitly contradict the argument of Theis et al against monolithic evaluation metrics for generative models, but this is not strongly supported.",
"Several references I suggest: https://arxiv.org/abs/1706.08500 (FID score) https://arxiv.org/abs/1511.04581 (MMD as evaluation)"
] | {
"criticism": 6,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 12
} | 0.5 | 0 | 0.25 | 0.833333 | 0.333333 | 0 | 0.25 | 0.25 | 12 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.416667 | 2.400884 | 0.015782 |
An empirical study on evaluation metrics of generative adversarial networks | Despite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect. | The paper describes an empirical evaluation of some of the most common metrics to evaluate GANs (inception score, mode score, kernel MMD, Wasserstein distance and LOO accuracy). The paper is well written, clear, organized and easy to follow. Given that the underlying application is image generation, the authors move from a pixel representation of images to using the feature representation given by a pre-trained ResNet, which is key in their results and further comparisons. They analyzed discriminability, mode collapsing and dropping, robustness to transformations, efficiency and overfitting. Although this work and its results are very useful for practitioners, it lacks in two aspects. First, it only considers a single task for which GANs are very popular. Second, it could benefit from a deeper (maybe theoretical analysis) of some of the questions. Some of the conclusions could be further clarified with additional experiments (e.g., Sec 3.6 ‘while the reason that RMS also fails to detect overfitting may again be its lack of generalization to datasets with classes not contained in the ImageNet dataset’). | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn empirical study on evaluation metrics of generative adversarial networks\nDespite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper describes an empirical evaluation of some of the most common metrics to evaluate GANs (inception score, mode score, kernel MMD, Wasserstein distance and LOO accuracy). The paper is well written, clear, organized and easy to follow. Given that the underlying application is image generation, the authors move from a pixel representation of images to using the feature representation given by a pre-trained ResNet, which is key in their results and further comparisons. They analyzed discriminability, mode collapsing and dropping, robustness to transformations, efficiency and overfitting. Although this work and its results are very useful for practitioners, it lacks in two aspects. First, it only considers a single task for which GANs are very popular. Second, it could benefit from a deeper (maybe theoretical analysis) of some of the questions. Some of the conclusions could be further clarified with additional experiments (e.g., Sec 3.6 ‘while the reason that RMS also fails to detect overfitting may again be its lack of generalization to datasets with classes not contained in the ImageNet dataset’).",
"role": "assistant"
}
]
| [
" The paper describes an empirical evaluation of some of the most common metrics to evaluate GANs (inception score, mode score, kernel MMD, Wasserstein distance and LOO accuracy).",
"The paper is well written, clear, organized and easy to follow.",
"Given that the underlying application is image generation, the authors move from a pixel representation of images to using the feature representation given by a pre-trained ResNet, which is key in their results and further comparisons.",
"They analyzed discriminability, mode collapsing and dropping, robustness to transformations, efficiency and overfitting.",
"Although this work and its results are very useful for practitioners, it lacks in two aspects.",
"First, it only considers a single task for which GANs are very popular.",
"Second, it could benefit from a deeper (maybe theoretical analysis) of some of the questions.",
"Some of the conclusions could be further clarified with additional experiments (e.g., Sec 3.6 ‘while the reason that RMS also fails to detect overfitting may again be its lack of generalization to datasets with classes not contained in the ImageNet dataset’)."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 8
} | 0.125 | 0.125 | 0.125 | 0.625 | 0.25 | 0.25 | 0.375 | 0.25 | 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
An empirical study on evaluation metrics of generative adversarial networks | Despite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect. | In the paper, the authors discuss several GAN evaluation metrics. Specifically, the authors pointed out some desirable properties that GANS evaluation metrics should satisfy. For those properties raised, the authors experimentally evaluated whether existing metrics satisfy those properties or not. Section 4 summarizes the results, which concluded that the Kernel MMD and 1-NN classifier in the feature space are so far recommended metrics to be used. I think this paper tackles an interesting and important problem, what metrics are preferred for evaluating GANs. In particular, the authors showed that Inception Score, which is one of the most popular metric, is actually not preferred for several reasons. The result, comparing data distributions and the distribution of the generator would be the preferred choice (that can be attained by Kernel MMD and 1-NN classifier), seems to be reasonable. This would not be a surprising result as the ultimate goal of GAN is mimicking the data distribution. However, the result is supported by exhaustive experiments making the result highly convincing. Overall, I think this paper is worthy for acceptance as several GAN methods are proposed and good evaluation metrics are needed for further improvements of the research field. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn empirical study on evaluation metrics of generative adversarial networks\nDespite the widespread interest in generative adversarial networks (GANs), few works have studied the metrics that quantitatively evaluate GANs* performance. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the important problem of emph{how to evaluate the evaluation metrics}. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. Then with a series of carefully designed experiments, we are able to comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbour (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far these state-of-the-art GANs are from perfect.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In the paper, the authors discuss several GAN evaluation metrics. Specifically, the authors pointed out some desirable properties that GANS evaluation metrics should satisfy. For those properties raised, the authors experimentally evaluated whether existing metrics satisfy those properties or not. Section 4 summarizes the results, which concluded that the Kernel MMD and 1-NN classifier in the feature space are so far recommended metrics to be used. I think this paper tackles an interesting and important problem, what metrics are preferred for evaluating GANs. In particular, the authors showed that Inception Score, which is one of the most popular metric, is actually not preferred for several reasons. The result, comparing data distributions and the distribution of the generator would be the preferred choice (that can be attained by Kernel MMD and 1-NN classifier), seems to be reasonable. This would not be a surprising result as the ultimate goal of GAN is mimicking the data distribution. However, the result is supported by exhaustive experiments making the result highly convincing. Overall, I think this paper is worthy for acceptance as several GAN methods are proposed and good evaluation metrics are needed for further improvements of the research field.",
"role": "assistant"
}
]
| [
" In the paper, the authors discuss several GAN evaluation metrics.",
"Specifically, the authors pointed out some desirable properties that GANS evaluation metrics should satisfy.",
"For those properties raised, the authors experimentally evaluated whether existing metrics satisfy those properties or not.",
"Section 4 summarizes the results, which concluded that the Kernel MMD and 1-NN classifier in the feature space are so far recommended metrics to be used.",
"I think this paper tackles an interesting and important problem, what metrics are preferred for evaluating GANs.",
"In particular, the authors showed that Inception Score, which is one of the most popular metric, is actually not preferred for several reasons.",
"The result, comparing data distributions and the distribution of the generator would be the preferred choice (that can be attained by Kernel MMD and 1-NN classifier), seems to be reasonable.",
"This would not be a surprising result as the ultimate goal of GAN is mimicking the data distribution.",
"However, the result is supported by exhaustive experiments making the result highly convincing.",
"Overall, I think this paper is worthy for acceptance as several GAN methods are proposed and good evaluation metrics are needed for further improvements of the research field."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 10,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 6,
"suggestion_and_solution": 2,
"total": 10
} | 0 | 0 | 0.2 | 1 | 0.3 | 0.2 | 0.6 | 0.2 | 10 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 2.357958 | 0.142042 |
Generating Wikipedia by Summarizing Long Sequences | We show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations. | This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a target Wikipedia page. The authors at first reduce the input size by using various extractive strategies and then use the selected content as input to the abstractive stage where they leverage the Transformer architecture with interesting modifications like dropping the encoder and proposing alternate self-attention mechanisms like local and memory compressed attention. In general, the paper is well-written and the main ideas are clear. However, my main concern is the evaluation. It would have been nice to see how the proposed methods perform with respect to the existing neural abstractive summarization approaches. Although authors argue in Section 2.1 that existing neural approaches are applied to other kinds of datasets where the input/output size ratios are smaller, experiments could have been performed to show their impact. Furthermore, I really expected to see a comparison with Sauper & Barzilay (2009)*s non-neural extractive approach of Wikipedia article generation, which could certainly strengthen the technical merit of the paper. More importantly, it was not clear from the paper if there was a constraint on the output length when each model generated the Wikipedia content. For example, Figure 5-7 show variable sizes of the generated outputs. With a fixed reference/target Wikipedia article, if different models generate variable sizes of output, ROUGE evaluation could easily pose a bias on a longer output as it essentially counts overlaps between the system output and the reference. It would have been nice to know if the proposed attention mechanisms account for significantly better results than the T-ED and T-D architectures. Did you run any statistical significance test on the evaluation results? Authors claim that the proposed model can generate "fluent, coherent" output, however, no evaluation has been conducted to justify this claim. The human evaluation only compares two alternative models for preference, which is not enough to support this claim. I would suggest to carry out a DUC-style user evaluation (http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt) methodology to really show that the proposed method works well for abstractive summarization. Does Figure 8 show an example input after the extractive stage or before? Please clarify. --------------- I have updated my scores as authors clarified most of my concerns. 
| iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerating Wikipedia by Summarizing Long Sequences\nWe show that generating English Wikipedia articles can be approached as a multi- document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder- decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a target Wikipedia page. The authors at first reduce the input size by using various extractive strategies and then use the selected content as input to the abstractive stage where they leverage the Transformer architecture with interesting modifications like dropping the encoder and proposing alternate self-attention mechanisms like local and memory compressed attention. In general, the paper is well-written and the main ideas are clear. However, my main concern is the evaluation. It would have been nice to see how the proposed methods perform with respect to the existing neural abstractive summarization approaches. Although authors argue in Section 2.1 that existing neural approaches are applied to other kinds of datasets where the input/output size ratios are smaller, experiments could have been performed to show their impact. Furthermore, I really expected to see a comparison with Sauper & Barzilay (2009)*s non-neural extractive approach of Wikipedia article generation, which could certainly strengthen the technical merit of the paper. More importantly, it was not clear from the paper if there was a constraint on the output length when each model generated the Wikipedia content. For example, Figure 5-7 show variable sizes of the generated outputs. With a fixed reference/target Wikipedia article, if different models generate variable sizes of output, ROUGE evaluation could easily pose a bias on a longer output as it essentially counts overlaps between the system output and the reference. It would have been nice to know if the proposed attention mechanisms account for significantly better results than the T-ED and T-D architectures. Did you run any statistical significance test on the evaluation results? 
Authors claim that the proposed model can generate \"fluent, coherent\" output, however, no evaluation has been conducted to justify this claim. The human evaluation only compares two alternative models for preference, which is not enough to support this claim. I would suggest to carry out a DUC-style user evaluation (http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt) methodology to really show that the proposed method works well for abstractive summarization. Does Figure 8 show an example input after the extractive stage or before? Please clarify. --------------- I have updated my scores as authors clarified most of my concerns.",
"role": "assistant"
}
]
| [
" This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a target Wikipedia page.",
"The authors at first reduce the input size by using various extractive strategies and then use the selected content as input to the abstractive stage where they leverage the Transformer architecture with interesting modifications like dropping the encoder and proposing alternate self-attention mechanisms like local and memory compressed attention.",
"In general, the paper is well-written and the main ideas are clear.",
"However, my main concern is the evaluation.",
"It would have been nice to see how the proposed methods perform with respect to the existing neural abstractive summarization approaches.",
"Although authors argue in Section 2.1 that existing neural approaches are applied to other kinds of datasets where the input/output size ratios are smaller, experiments could have been performed to show their impact.",
"Furthermore, I really expected to see a comparison with Sauper & Barzilay (2009)*s non-neural extractive approach of Wikipedia article generation, which could certainly strengthen the technical merit of the paper.",
"More importantly, it was not clear from the paper if there was a constraint on the output length when each model generated the Wikipedia content.",
"For example, Figure 5-7 show variable sizes of the generated outputs.",
"With a fixed reference/target Wikipedia article, if different models generate variable sizes of output, ROUGE evaluation could easily pose a bias on a longer output as it essentially counts overlaps between the system output and the reference.",
"It would have been nice to know if the proposed attention mechanisms account for significantly better results than the T-ED and T-D architectures.",
"Did you run any statistical significance test on the evaluation results?",
"Authors claim that the proposed model can generate \"fluent, coherent\" output, however, no evaluation has been conducted to justify this claim.",
"The human evaluation only compares two alternative models for preference, which is not enough to support this claim.",
"I would suggest to carry out a DUC-style user evaluation (http://www-nlpir.nist.gov/projects/duc/duc2007/quality-questions.txt) methodology to really show that the proposed method works well for abstractive summarization.",
"Does Figure 8 show an example input after the extractive stage or before?",
"Please clarify.",
"--------------- I have updated my scores as authors clarified most of my concerns."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 7,
"total": 18
} | 0.222222 | 0.111111 | 0.111111 | 0.777778 | 0.166667 | 0.277778 | 0.222222 | 0.388889 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.277778 | 1.883217 | 0.394561 |
Loss-aware Weight Quantization of Deep Networks | The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network. | This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights). The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose. The experiments are very clearly presented and solidly designed. In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods. Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm. However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ( ). This may not be relevant for the practical results, but to be accurate, it can*t be simply stated that the algorithm converges, without a more careful analysis. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLoss-aware Weight Quantization of Deep Networks\nThe huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights). The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose. The experiments are very clearly presented and solidly designed. In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides. Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods. Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm. However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ( ). This may not be relevant for the practical results, but to be accurate, it can*t be simply stated that the algorithm converges, without a more careful analysis.",
"role": "assistant"
}
]
| [
" This paper proposes a new method to train DNNs with quantized weights, by including the quantization as a constraint in a proximal quasi-Newton algorithm, which simultaneously learns a scaling for the quantized values (possibly different for positive and negative weights).",
"The paper is very clearly written, and the proposal is very well placed in the context of previous methods for the same purpose.",
"The experiments are very clearly presented and solidly designed.",
"In fact, the paper is a somewhat simple extension of the method proposed by Hou, Yao, and Kwok (2017), which is where the novelty resides.",
"Consequently, there is not a great degree of novelty in terms of the proposed method, and the results are only slightly better than those of previous methods.",
"Finally, in terms of analysis of the algorithm, the authors simply invoke a theorem from Hou, Yao, and Kwok (2017), which claims convergence of the proposed algorithm.",
"However, what is shown in that paper is that the sequence of loss function values converges, which does not imply that the sequence of weight estimates also converges, because of the presence of a non-convex constraint ( ).",
"This may not be relevant for the practical results, but to be accurate, it can*t be simply stated that the algorithm converges, without a more careful analysis."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 8
} | 0.375 | 0 | 0.25 | 0.875 | 0.25 | 0.375 | 0.375 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.625 | 2.230439 | 0.394561 |
Loss-aware Weight Quantization of Deep Networks | The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network. | In this paper, the authors propose a method of compressing network by means of weight ternarization. The network weights ternatization is formulated in the form of loss-aware quantization, which originally proposed by Hou et al. (2017). To this reviewer’s understanding, the proposed method can be regarded as the extension of the previous work of LAB and TWN, which can be the main contribution of the work. While the proposed method achieved promising results compared to the competing methods, it is still necessary to compare their computational complexity, which is one of the main concerns in network compression. It would be appreciated to have discussion on the results in Table 2, which tells that the performance of quantized networks is better than the full-precision network. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLoss-aware Weight Quantization of Deep Networks\nThe huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate (or even more accurate) than the full-precision network.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors propose a method of compressing network by means of weight ternarization. The network weights ternatization is formulated in the form of loss-aware quantization, which originally proposed by Hou et al. (2017). To this reviewer’s understanding, the proposed method can be regarded as the extension of the previous work of LAB and TWN, which can be the main contribution of the work. While the proposed method achieved promising results compared to the competing methods, it is still necessary to compare their computational complexity, which is one of the main concerns in network compression. It would be appreciated to have discussion on the results in Table 2, which tells that the performance of quantized networks is better than the full-precision network.",
"role": "assistant"
}
]
| [
" In this paper, the authors propose a method of compressing network by means of weight ternarization.",
"The network weights ternatization is formulated in the form of loss-aware quantization, which originally proposed by Hou et al.",
"(2017).",
"To this reviewer’s understanding, the proposed method can be regarded as the extension of the previous work of LAB and TWN, which can be the main contribution of the work.",
"While the proposed method achieved promising results compared to the competing methods, it is still necessary to compare their computational complexity, which is one of the main concerns in network compression.",
"It would be appreciated to have discussion on the results in Table 2, which tells that the performance of quantized networks is better than the full-precision network."
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 6
} | 0 | 0.166667 | 0.166667 | 0.833333 | 0.166667 | 0 | 0.333333 | 0.5 | 6 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
Large scale distributed neural network training through online distillation | Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing tokens and based on the Common Crawl repository of web data. | The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss. The proposed method is applied to two large-scale datasets and showed to perform better than other baselines such as label smoothing, and the standard ensemble. The paper is clearly written and was easy to understand. My major concern is the significance and originality of the proposed method. As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et. al (2017), at scale. But, because from Zhang*s method, I don*t see any significant difficulty in applying to large-scale problems, I*m not sure that this can be a significant contribution. Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models. In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated. Usually, ensemble becomes stronger as the number of model increases. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLarge scale distributed neural network training through online distillation\nTechniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing tokens and based on the Common Crawl repository of web data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss. The proposed method is applied to two large-scale datasets and showed to perform better than other baselines such as label smoothing, and the standard ensemble. The paper is clearly written and was easy to understand. My major concern is the significance and originality of the proposed method. As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et. al (2017), at scale. But, because from Zhang*s method, I don*t see any significant difficulty in applying to large-scale problems, I*m not sure that this can be a significant contribution. Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models. In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated. Usually, ensemble becomes stronger as the number of model increases.",
"role": "assistant"
}
]
| [
" The paper proposes an online distillation method, called co-distillation, where the two different models are trained to match the predictions of other model in addition to minimizing its own loss.",
"The proposed method is applied to two large-scale datasets and showed to perform better than other baselines such as label smoothing, and the standard ensemble.",
"The paper is clearly written and was easy to understand.",
"My major concern is the significance and originality of the proposed method.",
"As written by the authors, the main contribution of the paper is to apply the codistillation method, which is pretty similar to Zhang et.",
"al (2017), at scale.",
"But, because from Zhang*s method, I don*t see any significant difficulty in applying to large-scale problems, I*m not sure that this can be a significant contribution.",
"Rather, I think, it would have been better for the authors to apply the proposed methods to a smaller scale problems as well in order to explore more various aspects of the proposed methods including the effects of number of different models.",
"In this sense, it is also a limitation that the authors showing experiments where only two models are codistillated.",
"Usually, ensemble becomes stronger as the number of model increases."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 9,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 10
} | 0.2 | 0 | 0.4 | 0.9 | 0.2 | 0.1 | 0.3 | 0.1 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.057958 | 0.142042 |
Fraternal Dropout | Recurrent neural networks (RNNs) are important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks. | The proposed method fraternal dropout is a stochastic alternative of the expectation-linear dropout method, where part of the objective is for the dropout mask to have low variance. The first order way to achieve lower variance is to have smaller weights. The second order is by having more evenly spread weights, so there is more concentration around the mean. As a result, it seems that at least part of the effect of explicitly reducing the variance is just stronger weight penalty. The effect of dropout in the first place is the opposite, where variance is introduced deliberately. So I would like to see some comparisons between this method and various dropout rates, and regular weight penalty combinations. This work is very closely related to expectation linear dropout, except that you are now actually minimizing the variance: 1/2E[ ||f(s) - f(s*)|| ] is used instead of E [ ||f(s) - f_bar|| ]. Eq 5 is very close to this, except the f_bar is not quite the mean, but the value with the mean dropout mask. So all the results should be compared with ELD. I do not think the method is theoretically well-motivated as presented, but the empirical results seem solid. It is somewhat alarming how the analysis has little to do with the neural networks and how dropout works, let along RNNs, while the strength of the empirical results are all on RNNs. I feel the ideas interesting and valuable especially in light of strong empirical results, but the authors should do more to clarify what is actually happening. Minor: why use s_i and s_j, when there is never any reference to i and j? As far as I can tell, i and j serve as constants, more like s_1 and s_2. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFraternal Dropout\nRecurrent neural networks (RNNs) are important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The proposed method fraternal dropout is a stochastic alternative of the expectation-linear dropout method, where part of the objective is for the dropout mask to have low variance. The first order way to achieve lower variance is to have smaller weights. The second order is by having more evenly spread weights, so there is more concentration around the mean. As a result, it seems that at least part of the effect of explicitly reducing the variance is just stronger weight penalty. The effect of dropout in the first place is the opposite, where variance is introduced deliberately. So I would like to see some comparisons between this method and various dropout rates, and regular weight penalty combinations. This work is very closely related to expectation linear dropout, except that you are now actually minimizing the variance: 1/2E[ ||f(s) - f(s*)|| ] is used instead of E [ ||f(s) - f_bar|| ]. Eq 5 is very close to this, except the f_bar is not quite the mean, but the value with the mean dropout mask. So all the results should be compared with ELD. I do not think the method is theoretically well-motivated as presented, but the empirical results seem solid. It is somewhat alarming how the analysis has little to do with the neural networks and how dropout works, let along RNNs, while the strength of the empirical results are all on RNNs. I feel the ideas interesting and valuable especially in light of strong empirical results, but the authors should do more to clarify what is actually happening. Minor: why use s_i and s_j, when there is never any reference to i and j? As far as I can tell, i and j serve as constants, more like s_1 and s_2.",
"role": "assistant"
}
]
| [
" The proposed method fraternal dropout is a stochastic alternative of the expectation-linear dropout method, where part of the objective is for the dropout mask to have low variance.",
"The first order way to achieve lower variance is to have smaller weights.",
"The second order is by having more evenly spread weights, so there is more concentration around the mean.",
"As a result, it seems that at least part of the effect of explicitly reducing the variance is just stronger weight penalty.",
"The effect of dropout in the first place is the opposite, where variance is introduced deliberately.",
"So I would like to see some comparisons between this method and various dropout rates, and regular weight penalty combinations.",
"This work is very closely related to expectation linear dropout, except that you are now actually minimizing the variance: 1/2E[ ||f(s) - f(s*)|| ] is used instead of E [ ||f(s) - f_bar|| ].",
"Eq 5 is very close to this, except the f_bar is not quite the mean, but the value with the mean dropout mask.",
"So all the results should be compared with ELD.",
"I do not think the method is theoretically well-motivated as presented, but the empirical results seem solid.",
"It is somewhat alarming how the analysis has little to do with the neural networks and how dropout works, let along RNNs, while the strength of the empirical results are all on RNNs.",
"I feel the ideas interesting and valuable especially in light of strong empirical results, but the authors should do more to clarify what is actually happening.",
"Minor: why use s_i and s_j, when there is never any reference to i and j?",
"As far as I can tell, i and j serve as constants, more like s_1 and s_2."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 8,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 10,
"suggestion_and_solution": 4,
"total": 14
} | 0.214286 | 0.071429 | 0.071429 | 0.571429 | 0.142857 | 0.285714 | 0.714286 | 0.285714 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.357143 | 2.34136 | 0.015782 |
Debiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference | The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders. Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference | This paper provides an interesting analysis of the importance sampled estimate of the LL bound and proposes to use Jackknife to correct for the bias. The experiments show that the proposed method works for model evaluation and that computing the correction is archivable at a reasonable computational cost. It also contains an insightful analysis. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDebiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference\nThe importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders. Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper provides an interesting analysis of the importance sampled estimate of the LL bound and proposes to use Jackknife to correct for the bias. The experiments show that the proposed method works for model evaluation and that computing the correction is archivable at a reasonable computational cost. It also contains an insightful analysis.",
"role": "assistant"
}
]
| [
" This paper provides an interesting analysis of the importance sampled estimate of the LL bound and proposes to use Jackknife to correct for the bias.",
"The experiments show that the proposed method works for model evaluation and that computing the correction is archivable at a reasonable computational cost.",
"It also contains an insightful analysis."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 3
} | 0 | 0 | 0.333333 | 1 | 0.666667 | 0 | 0.333333 | 0 | 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.333333 | 0.75509 | 1.578243 |
Debiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference | The importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders. Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference | The authors analyze the bias and variance of the IWAE bound from Burda et al. (2015), and with explicit formulas up to vanishing polynomial terms and intractable moments. This leads them to derive a jacknife approach to estimate the moments as a way to debias the IWAE for finite importance weighted samples. They apply it for training and also as an evaluation method to assess the marginal likelihood at test time. The paper is well-written and offers an interesting combination of ideas motivated from statistical analysis. Following classical results from the debiasing literature, they show a jacknife approach has reduced bias (unknown for variance). In practice, this involves an enumerated subset of calculations leading to a linear cost with respect to the number of samples which I*m inclined to agree is not too expensive. The experiments are unfortunately limited to binarized MNIST. Also, all benchmarks measure lower bound estimates with respect to importance samples, when it*s more accurate to measure with respect to runtime. This would be far more convincing as a way to explain how that constant to the linear-time affects computation in practice. The same would be useful to compare the estimate of the marginal likelihood over training runtime. Also, I wasn*t sure if the JVI estimator still produced a lower bound to make the comparisons. It would be useful if the authors could clarify these details. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDebiasing Evidence Approximations: On Importance-weighted Autoencoders and Jackknife Variational Inference\nThe importance-weighted autoencoder (IWAE) approach of Burda et al. defines a sequence of increasingly tighter bounds on the marginal likelihood of latent variable models. Recently, Cremer et al. reinterpreted the IWAE bounds as ordinary variational evidence lower bounds (ELBO) applied to increasingly accurate variational distributions. In this work, we provide yet another perspective on the IWAE bounds. We interpret each IWAE bound as a biased estimator of the true marginal likelihood where for the bound defined on samples we show the bias to be of order O(1/K). In our theoretical analysis of the IWAE objective we derive asymptotic bias and variance expressions. Based on this analysis we develop jackknife variational inference (JVI), a family of bias-reduced estimators reducing the bias to for any given m < K while retaining computational efficiency. Finally, we demonstrate that JVI leads to improved evidence estimates in variational autoencoders. We also report first results on applying JVI to learning variational autoencoders. Our implementation is available at https://github.com/Microsoft/jackknife-variational-inference",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors analyze the bias and variance of the IWAE bound from Burda et al. (2015), and with explicit formulas up to vanishing polynomial terms and intractable moments. This leads them to derive a jacknife approach to estimate the moments as a way to debias the IWAE for finite importance weighted samples. They apply it for training and also as an evaluation method to assess the marginal likelihood at test time. The paper is well-written and offers an interesting combination of ideas motivated from statistical analysis. Following classical results from the debiasing literature, they show a jacknife approach has reduced bias (unknown for variance). In practice, this involves an enumerated subset of calculations leading to a linear cost with respect to the number of samples which I*m inclined to agree is not too expensive. The experiments are unfortunately limited to binarized MNIST. Also, all benchmarks measure lower bound estimates with respect to importance samples, when it*s more accurate to measure with respect to runtime. This would be far more convincing as a way to explain how that constant to the linear-time affects computation in practice. The same would be useful to compare the estimate of the marginal likelihood over training runtime. Also, I wasn*t sure if the JVI estimator still produced a lower bound to make the comparisons. It would be useful if the authors could clarify these details.",
"role": "assistant"
}
]
| [
" The authors analyze the bias and variance of the IWAE bound from Burda et al.",
"(2015), and with explicit formulas up to vanishing polynomial terms and intractable moments.",
"This leads them to derive a jacknife approach to estimate the moments as a way to debias the IWAE for finite importance weighted samples.",
"They apply it for training and also as an evaluation method to assess the marginal likelihood at test time.",
"The paper is well-written and offers an interesting combination of ideas motivated from statistical analysis.",
"Following classical results from the debiasing literature, they show a jacknife approach has reduced bias (unknown for variance).",
"In practice, this involves an enumerated subset of calculations leading to a linear cost with respect to the number of samples which I*m inclined to agree is not too expensive.",
"The experiments are unfortunately limited to binarized MNIST.",
"Also, all benchmarks measure lower bound estimates with respect to importance samples, when it*s more accurate to measure with respect to runtime.",
"This would be far more convincing as a way to explain how that constant to the linear-time affects computation in practice.",
"The same would be useful to compare the estimate of the marginal likelihood over training runtime.",
"Also, I wasn*t sure if the JVI estimator still produced a lower bound to make the comparisons.",
"It would be useful if the authors could clarify these details."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 8,
"suggestion_and_solution": 3,
"total": 13
} | 0.230769 | 0 | 0.076923 | 0.692308 | 0.076923 | 0.307692 | 0.615385 | 0.230769 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.230769 | 2.230769 | 0 |
Generative networks as inverse problems with Scattering transforms | Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder. | After a first manuscript that needed majors edits, the revised version offers an interesting GAN approach based the scattering transform. Approach is well motivated with proper references to the recent literature. Experiments are not state of the art but clearly demonstrate that the proposed approach does provide meaningful results. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerative networks as inverse problems with Scattering transforms\nGenerative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n After a first manuscript that needed majors edits, the revised version offers an interesting GAN approach based the scattering transform. Approach is well motivated with proper references to the recent literature. Experiments are not state of the art but clearly demonstrate that the proposed approach does provide meaningful results.",
"role": "assistant"
}
]
| [
" After a first manuscript that needed majors edits, the revised version offers an interesting GAN approach based the scattering transform.",
"Approach is well motivated with proper references to the recent literature.",
"Experiments are not state of the art but clearly demonstrate that the proposed approach does provide meaningful results."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 2,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 3
} | 0 | 0 | 0.666667 | 0.666667 | 0.666667 | 0 | 0.333333 | 0.333333 | 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.666667 | 1.088424 | 1.578243 |
Lifelong Word Embedding via Meta-Learning | Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledgesfootnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks. | This paper presents a lifelong learning method for learning word embeddings. Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings. The general problem space here -- how to leverage embeddings across several domains in order to improve performance in a given domain -- is important and relevant to ICLR. However, this submission needs to be improved in terms of clarity and its experiments. In terms of clarity, the paper has a large number of typos (I list a few at the end of this review) and more significantly, at several points in the paper is hard to tell what exactly was done and why. When presenting algorithms, starting with an English description of the high-level goal and steps of the algorithm would be helpful. What are the inputs and outputs of the meta-learner, and how will it be used to obtain embeddings for the new domain? The paper states the purpose of the meta learning is "to learn a general word context similarity from the first m domains", but I was never sure what this meant. Further, some of the paper*s pseudocode includes unexplained steps like "invert by domain index" and "scanco-occurrence". In terms of the experiments, the paper is missing some important baselines that would help us understand how well the approach works. First, besides the GloVe common crawl embeddings used here, there are several other embedding sets (including the other GloVe embeddings released along with the ones used here, and the Google News word2vec embeddings) that should be considered. 
Also, the paper never considers concatenations of large pre-trained embedding sets with each other and/or with the new domain corpus -- such concatenations often give a big boost to accuracy, see : "Think Globally, Embed Locally—Locally Linear Meta-embedding of Words", Bollegala et al., 2017 https://arxiv.org/pdf/1709.06671.pdf That paper is not peer reviewed to my knowledge so it is not necessary to compare against the new methods introduced there, but their baselines of concatenation of pre-trained embedding sets should be compared against in the submission. Beyond trying other embeddings, the paper should also compare against simpler combination approaches, including simpler variants of its own approach. What if we just selected the one past domain that was most similar to the new domain, by some measure? And how does the performance of the technique depend on the setting of m? Investigating some of these questions would help us understand how well the approach works and in which settings. Minor: Second paragraph, GloVec should be GloVe "given many domains with uncertain noise for the new domain" -- not clear what "uncertain noise" means, perhaps "uncertain relevance" would be more clear The text refers to a Figure 3 which does not exist, probably means Figure 2. I didn*t understand the need for both figures, Figure 1 is almost contained within Figure 2 When m is introduced, it would help to say that m < n and justify why dividing the n domains into two chunks (of m and n-m domains) is necessary. "from the first m domain corpus" -> "from the first m domains"? "may not helpful" -> "may not be helpful" "vocabularie" -> "vocabulary" "system first retrieval" -> "system first retrieves" COMMENTS ON REVISIONS: I appreciate the authors including the new experiments against concatenation baselines. The concatenation does fairly comparably to LL in Tables 3&4. LL wins by a bit more in Table 2. Given these somewhat close/inconsistent wins, it would help the paper to include an explanation of why and under what conditions the LL approach will outperform concatenation. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLifelong Word Embedding via Meta-Learning\nLearning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledgesfootnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a lifelong learning method for learning word embeddings. Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings. The general problem space here -- how to leverage embeddings across several domains in order to improve performance in a given domain -- is important and relevant to ICLR. However, this submission needs to be improved in terms of clarity and its experiments. In terms of clarity, the paper has a large number of typos (I list a few at the end of this review) and more significantly, at several points in the paper is hard to tell what exactly was done and why. When presenting algorithms, starting with an English description of the high-level goal and steps of the algorithm would be helpful. What are the inputs and outputs of the meta-learner, and how will it be used to obtain embeddings for the new domain? The paper states the purpose of the meta learning is \"to learn a general word context similarity from the first m domains\", but I was never sure what this meant. Further, some of the paper*s pseudocode includes unexplained steps like \"invert by domain index\" and \"scanco-occurrence\". In terms of the experiments, the paper is missing some important baselines that would help us understand how well the approach works. First, besides the GloVe common crawl embeddings used here, there are several other embedding sets (including the other GloVe embeddings released along with the ones used here, and the Google News word2vec embeddings) that should be considered. 
Also, the paper never considers concatenations of large pre-trained embedding sets with each other and/or with the new domain corpus -- such concatenations often give a big boost to accuracy, see : \"Think Globally, Embed Locally—Locally Linear Meta-embedding of Words\", Bollegala et al., 2017 https://arxiv.org/pdf/1709.06671.pdf That paper is not peer reviewed to my knowledge so it is not necessary to compare against the new methods introduced there, but their baselines of concatenation of pre-trained embedding sets should be compared against in the submission. Beyond trying other embeddings, the paper should also compare against simpler combination approaches, including simpler variants of its own approach. What if we just selected the one past domain that was most similar to the new domain, by some measure? And how does the performance of the technique depend on the setting of m? Investigating some of these questions would help us understand how well the approach works and in which settings. Minor: Second paragraph, GloVec should be GloVe \"given many domains with uncertain noise for the new domain\" -- not clear what \"uncertain noise\" means, perhaps \"uncertain relevance\" would be more clear The text refers to a Figure 3 which does not exist, probably means Figure 2. I didn*t understand the need for both figures, Figure 1 is almost contained within Figure 2 When m is introduced, it would help to say that m < n and justify why dividing the n domains into two chunks (of m and n-m domains) is necessary. \"from the first m domain corpus\" -> \"from the first m domains\"? \"may not helpful\" -> \"may not be helpful\" \"vocabularie\" -> \"vocabulary\" \"system first retrieval\" -> \"system first retrieves\" COMMENTS ON REVISIONS: I appreciate the authors including the new experiments against concatenation baselines. The concatenation does fairly comparably to LL in Tables 3&4. LL wins by a bit more in Table 2. Given these somewhat close/inconsistent wins, it would help the paper to include an explanation of why and under what conditions the LL approach will outperform concatenation.",
"role": "assistant"
}
]
| [
" This paper presents a lifelong learning method for learning word embeddings.",
"Given a new domain of interest, the method leverages previously seen domains in order to hopefully generate better embeddings compared to ones computed over just the new domain, or standard pre-trained embeddings.",
"The general problem space here -- how to leverage embeddings across several domains in order to improve performance in a given domain -- is important and relevant to ICLR.",
"However, this submission needs to be improved in terms of clarity and its experiments.",
"In terms of clarity, the paper has a large number of typos (I list a few at the end of this review) and more significantly, at several points in the paper is hard to tell what exactly was done and why.",
"When presenting algorithms, starting with an English description of the high-level goal and steps of the algorithm would be helpful.",
"What are the inputs and outputs of the meta-learner, and how will it be used to obtain embeddings for the new domain?",
"The paper states the purpose of the meta learning is \"to learn a general word context similarity from the first m domains\", but I was never sure what this meant.",
"Further, some of the paper*s pseudocode includes unexplained steps like \"invert by domain index\" and \"scanco-occurrence\".",
"In terms of the experiments, the paper is missing some important baselines that would help us understand how well the approach works.",
"First, besides the GloVe common crawl embeddings used here, there are several other embedding sets (including the other GloVe embeddings released along with the ones used here, and the Google News word2vec embeddings) that should be considered.",
"Also, the paper never considers concatenations of large pre-trained embedding sets with each other and/or with the new domain corpus -- such concatenations often give a big boost to accuracy, see : \"Think Globally, Embed Locally—Locally Linear Meta-embedding of Words\", Bollegala et al., 2017 https://arxiv.org/pdf/1709.06671.pdf That paper is not peer reviewed to my knowledge so it is not necessary to compare against the new methods introduced there, but their baselines of concatenation of pre-trained embedding sets should be compared against in the submission.",
"Beyond trying other embeddings, the paper should also compare against simpler combination approaches, including simpler variants of its own approach.",
"What if we just selected the one past domain that was most similar to the new domain, by some measure?",
"And how does the performance of the technique depend on the setting of m?",
"Investigating some of these questions would help us understand how well the approach works and in which settings.",
"Minor: Second paragraph, GloVec should be GloVe \"given many domains with uncertain noise for the new domain\" -- not clear what \"uncertain noise\" means, perhaps \"uncertain relevance\" would be more clear The text refers to a Figure 3 which does not exist, probably means Figure 2.",
"I didn*t understand the need for both figures, Figure 1 is almost contained within Figure 2 When m is introduced, it would help to say that m < n and justify why dividing the n domains into two chunks (of m and n-m domains) is necessary.",
"\"from the first m domain corpus\" -> \"from the first m domains\"?",
"\"may not helpful\" -> \"may not be helpful\" \"vocabularie\" -> \"vocabulary\" \"system first retrieval\" -> \"system first retrieves\" COMMENTS ON REVISIONS: I appreciate the authors including the new experiments against concatenation baselines.",
"The concatenation does fairly comparably to LL in Tables 3&4.",
"LL wins by a bit more in Table 2.",
"Given these somewhat close/inconsistent wins, it would help the paper to include an explanation of why and under what conditions the LL approach will outperform concatenation."
] | {
"criticism": 7,
"example": 5,
"importance_and_relevance": 4,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 12,
"results_and_discussion": 1,
"suggestion_and_solution": 10,
"total": 23
} | 0.304348 | 0.217391 | 0.173913 | 0.478261 | 0.130435 | 0.521739 | 0.043478 | 0.434783 | 23 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.304348 | 0.726105 | 1.578243 |
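A note on the concatenation baseline the review above asks for: a minimal sketch of what such a baseline could look like, assuming the general-purpose and in-domain embedding tables are already loaded as word-to-vector dicts. The function name, the dict-based interface, and the zero-vector fallback are illustrative assumptions, not details from the paper or the cited work.

```python
import numpy as np

def concat_embeddings(general_vecs, domain_vecs, dim_general, dim_domain):
    """Concatenate a general-purpose vector with a domain vector per word.

    general_vecs / domain_vecs: dicts mapping word -> 1-D np.ndarray.
    Words missing from one table fall back to a zero vector of that part,
    so the combined vocabulary is the union of the two.
    """
    vocab = set(general_vecs) | set(domain_vecs)
    combined = {}
    for w in vocab:
        g = np.asarray(general_vecs.get(w, np.zeros(dim_general)))
        d = np.asarray(domain_vecs.get(w, np.zeros(dim_domain)))
        combined[w] = np.concatenate([g, d])  # simple concatenation baseline
    return combined
```

The appeal of this baseline is that it needs no training at all, which is one reason reviewers ask for it before accepting a learned combination method.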
Lifelong Word Embedding via Meta-Learning | Learning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledgesfootnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks. | In this paper, the authors proposed to learn word embedding for the target domain in the lifelong learning manner. The basic idea is to learn a so-call meter learner to measure similarities of the same words between the target domain and the source domains for help learning word embedding for the target domain with a small corpus. Overall, the descriptions of the proposed model (Section 3 - Section 5) are hard to follow. This is not because the proposed model is technically difficult to understand. On the contrary, the model is heuristic, and simple, but the descriptions are unclear. Section 3 is supposed to give an overview and high-level introduction of the whole model using the Figure 1, and Figure 2 (not Figure 3 mentioned in text). However, after reading Section 3, I do not catch any useful information about the proposed model expect for knowing that a so-called meta learner is used. Section 4 and Section 5 are supposed to give details of different components of the proposed model and explain the motivations. However, descriptions in these two sections are very confusing, e.g, many symbols in Algorithm 1 are presented with any descriptions. Moreover, the motivations behind the proposed methods for different components are missing. Also, a lot of types make the descriptions more difficult to follow, e.g., "may not helpful or even harmful", *"Figure 3", "we show this Section 6", "large size a vocabulary", etc. Another major concern is that the technical contributions of the proposed model is quite limited. The only technical contributions are (4) and the way to construct the co-occurrence information A. However, such contributions are quite minor, and technically heuristic. Moreover, regarding the aggregation layer in the pairwise network, it is similar to feature engineering. In this case, why not just train a flat classifier, like logistic regression, with rich feature engineering, in stead of using a neural network. Regarding experiments, one straight-forward baseline is missing. 
As n domains are supposed to be given in advance before the n+1 domain (target domain) comes, one can use multi-domain learning approaches with ensemble learning techniques to learn word embedding for the target domain. For instance, one can learn n pairwise (1 out of n sources + the target) cross-domain word embedding, and combine them using the similarity between each source and the target as the weight. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLifelong Word Embedding via Meta-Learning\nLearning high-quality word embeddings is of significant importance in achieving better performance in many down-stream learning tasks. On one hand, traditional word embeddings are trained on a large scale corpus for general-purpose tasks, which are often sub-optimal for many domain-specific tasks. On the other hand, many domain-specific tasks do not have a large enough domain corpus to obtain high-quality embeddings. We observe that domains are not isolated and a small domain corpus can leverage the learned knowledge from many past domains to augment that corpus in order to generate high-quality embeddings. In this paper, we formulate the learning of word embeddings as a lifelong learning process. Given knowledge learned from many previous domains and a small new domain corpus, the proposed method can effectively generate new domain embeddings by leveraging a simple but effective algorithm and a meta-learner, where the meta-learner is able to provide word context similarity information at the domain-level. Experimental results demonstrate that the proposed method can effectively learn new domain embeddings from a small corpus and past domain knowledgesfootnote{We will release the code after final revisions.}. We also demonstrate that general-purpose embeddings trained from a large scale corpus are sub-optimal in domain-specific tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors proposed to learn word embedding for the target domain in the lifelong learning manner. The basic idea is to learn a so-call meter learner to measure similarities of the same words between the target domain and the source domains for help learning word embedding for the target domain with a small corpus. Overall, the descriptions of the proposed model (Section 3 - Section 5) are hard to follow. This is not because the proposed model is technically difficult to understand. On the contrary, the model is heuristic, and simple, but the descriptions are unclear. Section 3 is supposed to give an overview and high-level introduction of the whole model using the Figure 1, and Figure 2 (not Figure 3 mentioned in text). However, after reading Section 3, I do not catch any useful information about the proposed model expect for knowing that a so-called meta learner is used. Section 4 and Section 5 are supposed to give details of different components of the proposed model and explain the motivations. However, descriptions in these two sections are very confusing, e.g, many symbols in Algorithm 1 are presented with any descriptions. Moreover, the motivations behind the proposed methods for different components are missing. Also, a lot of types make the descriptions more difficult to follow, e.g., \"may not helpful or even harmful\", *\"Figure 3\", \"we show this Section 6\", \"large size a vocabulary\", etc. Another major concern is that the technical contributions of the proposed model is quite limited. The only technical contributions are (4) and the way to construct the co-occurrence information A. However, such contributions are quite minor, and technically heuristic. Moreover, regarding the aggregation layer in the pairwise network, it is similar to feature engineering. In this case, why not just train a flat classifier, like logistic regression, with rich feature engineering, in stead of using a neural network. Regarding experiments, one straight-forward baseline is missing. 
As n domains are supposed to be given in advance before the n+1 domain (target domain) comes, one can use multi-domain learning approaches with ensemble learning techniques to learn word embedding for the target domain. For instance, one can learn n pairwise (1 out of n sources + the target) cross-domain word embedding, and combine them using the similarity between each source and the target as the weight.",
"role": "assistant"
}
]
| [
" In this paper, the authors proposed to learn word embedding for the target domain in the lifelong learning manner.",
"The basic idea is to learn a so-call meter learner to measure similarities of the same words between the target domain and the source domains for help learning word embedding for the target domain with a small corpus.",
"Overall, the descriptions of the proposed model (Section 3 - Section 5) are hard to follow.",
"This is not because the proposed model is technically difficult to understand.",
"On the contrary, the model is heuristic, and simple, but the descriptions are unclear.",
"Section 3 is supposed to give an overview and high-level introduction of the whole model using the Figure 1, and Figure 2 (not Figure 3 mentioned in text).",
"However, after reading Section 3, I do not catch any useful information about the proposed model expect for knowing that a so-called meta learner is used.",
"Section 4 and Section 5 are supposed to give details of different components of the proposed model and explain the motivations.",
"However, descriptions in these two sections are very confusing, e.g, many symbols in Algorithm 1 are presented with any descriptions.",
"Moreover, the motivations behind the proposed methods for different components are missing.",
"Also, a lot of types make the descriptions more difficult to follow, e.g., \"may not helpful or even harmful\", *\"Figure 3\", \"we show this Section 6\", \"large size a vocabulary\", etc.",
"Another major concern is that the technical contributions of the proposed model is quite limited.",
"The only technical contributions are (4) and the way to construct the co-occurrence information A.",
"However, such contributions are quite minor, and technically heuristic.",
"Moreover, regarding the aggregation layer in the pairwise network, it is similar to feature engineering.",
"In this case, why not just train a flat classifier, like logistic regression, with rich feature engineering, in stead of using a neural network.",
"Regarding experiments, one straight-forward baseline is missing.",
"As n domains are supposed to be given in advance before the n+1 domain (target domain) comes, one can use multi-domain learning approaches with ensemble learning techniques to learn word embedding for the target domain.",
"For instance, one can learn n pairwise (1 out of n sources + the target) cross-domain word embedding, and combine them using the similarity between each source and the target as the weight."
] | {
"criticism": 10,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 13,
"praise": 1,
"presentation_and_reporting": 10,
"results_and_discussion": 0,
"suggestion_and_solution": 4,
"total": 19
} | 0.526316 | 0.105263 | 0.052632 | 0.684211 | 0.052632 | 0.526316 | 0 | 0.210526 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2.157895 | 1.589727 | 0.568167 |
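The pairwise-ensemble baseline sketched at the end of the review above (n cross-domain embeddings combined with source-target similarity weights) might look roughly like this; the per-source embedding matrices and the similarity scores are assumed inputs here, and how they are obtained is left open, as in the review.

```python
import numpy as np

def weighted_ensemble(source_embeddings, similarities):
    """Combine per-source-domain embeddings for a shared target vocabulary.

    source_embeddings: list of (V, d) arrays, one per source domain, with
        rows aligned to the same target-domain vocabulary.
    similarities: one non-negative score per source domain, measuring how
        close that source is to the target domain.
    """
    weights = np.asarray(similarities, dtype=float)
    weights = weights / weights.sum()               # convex combination
    stacked = np.stack(source_embeddings, axis=0)   # shape (n, V, d)
    return np.tensordot(weights, stacked, axes=1)   # shape (V, d)
```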
On the Generalization Effects of DenseNet Model Structures | Modern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures have been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures* implicit regularization properties? In this work, we investigate the skip connection*s effect on network*s generalization features. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with *skip connections*. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data is decreased. | The ms analyses a number of simulations how skip connections effect the generalization of different network architectures. The experiments are somewhat interesting but they appear rather preliminary. To indeed show the claims made, error bars in the graphs would be necessary as well will more careful and more generic analysis. In addition clear hypotheses should be stated. The fact that some behaviour is seen in MNIST or CIFAR in the simulations does not permit conclusion for other data sets. Typically extensive teacher student simulations are required to validly make points. Also formally the paper is not in good shape. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn the Generalization Effects of DenseNet Model Structures\nModern neural network architectures take advantage of increasingly deeper layers, and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures have been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures* implicit regularization properties? In this work, we investigate the skip connection*s effect on network*s generalization features. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet as well as networks with *skip connections*. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data is decreased.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The ms analyses a number of simulations how skip connections effect the generalization of different network architectures. The experiments are somewhat interesting but they appear rather preliminary. To indeed show the claims made, error bars in the graphs would be necessary as well will more careful and more generic analysis. In addition clear hypotheses should be stated. The fact that some behaviour is seen in MNIST or CIFAR in the simulations does not permit conclusion for other data sets. Typically extensive teacher student simulations are required to validly make points. Also formally the paper is not in good shape.",
"role": "assistant"
}
]
| [
" The ms analyses a number of simulations how skip connections effect the generalization of different network architectures.",
"The experiments are somewhat interesting but they appear rather preliminary.",
"To indeed show the claims made, error bars in the graphs would be necessary as well will more careful and more generic analysis.",
"In addition clear hypotheses should be stated.",
"The fact that some behaviour is seen in MNIST or CIFAR in the simulations does not permit conclusion for other data sets.",
"Typically extensive teacher student simulations are required to validly make points.",
"Also formally the paper is not in good shape."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 7
} | 0.428571 | 0 | 0.142857 | 0.714286 | 0.142857 | 0.285714 | 0.285714 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
Large Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling | A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K. | The paper proposes a new approach to determine learning late for convolutional neural networks. It starts from observation that for batch learning with a fixed number of epochs, the accuracy drops when the batch size is too large. Assuming that the number or epochs and batch size are fixed, the contribution of the paper is a heuristic that assigns different learning late to each layer of a network depending on a ratio of the norms of weights and gradients in a layer. The experimental results show that the proposed heuristic helps AlexNet and ResNet end up in a larger accuracy on ImageNet data. Positives: - the proposed approach is intuitively justified - the experimental results are encouraging Negatives: - the methodological contribution is minor - no attempt is made to theoretically justify the proposed heuristic - the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training - the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach Suggestions: - consider using different abbreviation (LARS is used for least-angle regression) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLarge Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling\nA common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a new approach to determine learning late for convolutional neural networks. It starts from observation that for batch learning with a fixed number of epochs, the accuracy drops when the batch size is too large. Assuming that the number or epochs and batch size are fixed, the contribution of the paper is a heuristic that assigns different learning late to each layer of a network depending on a ratio of the norms of weights and gradients in a layer. The experimental results show that the proposed heuristic helps AlexNet and ResNet end up in a larger accuracy on ImageNet data. Positives: - the proposed approach is intuitively justified - the experimental results are encouraging Negatives: - the methodological contribution is minor - no attempt is made to theoretically justify the proposed heuristic - the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training - the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach Suggestions: - consider using different abbreviation (LARS is used for least-angle regression)",
"role": "assistant"
}
]
| [
" The paper proposes a new approach to determine learning late for convolutional neural networks.",
"It starts from observation that for batch learning with a fixed number of epochs, the accuracy drops when the batch size is too large.",
"Assuming that the number or epochs and batch size are fixed, the contribution of the paper is a heuristic that assigns different learning late to each layer of a network depending on a ratio of the norms of weights and gradients in a layer.",
"The experimental results show that the proposed heuristic helps AlexNet and ResNet end up in a larger accuracy on ImageNet data.",
"Positives: - the proposed approach is intuitively justified - the experimental results are encouraging Negatives: - the methodological contribution is minor - no attempt is made to theoretically justify the proposed heuristic - the method introduces one or two new hyperparameters and it is not clear from the experimental results what overhead is this adding to network training - the experiments are done only on a single data set, which is not sufficient to establish superiority of an approach Suggestions: - consider using different abbreviation (LARS is used for least-angle regression)"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 0,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 5
} | 0.2 | 0 | 0.2 | 1 | 0 | 0 | 0.6 | 0.2 | 5 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 2.2 | 1.189924 | 1.010076 |
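The review above describes the contribution as a heuristic that scales each layer's learning rate by the ratio of weight norm to gradient norm. A minimal sketch of that rule for plain SGD, leaving out the momentum and weight-decay terms of the full algorithm; the trust coefficient is the extra hyper-parameter the reviewers ask about, and its default value below is illustrative only.

```python
import numpy as np

def lars_sgd_step(weights, grads, global_lr, trust_coef=0.001, eps=1e-8):
    """One simplified layer-wise adaptive SGD step (LARS-style).

    weights, grads: lists of per-layer arrays.  Each layer gets a local
    learning rate proportional to ||w|| / ||g||, so layers whose gradients
    are small relative to their weights are not starved, and layers with
    very large gradients do not diverge at large batch sizes.
    """
    updated = []
    for w, g in zip(weights, grads):
        local_lr = trust_coef * np.linalg.norm(w) / (np.linalg.norm(g) + eps)
        updated.append(w - global_lr * local_lr * g)
    return updated
```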
Large Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling | A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K. | This paper provides an optimization approach for large batch training of CNN with layer-wise adaptive learning rates. It starts from the observation that the ratio between the L2-norm of parameters and that of gradients on parameters varies significantly in the optimization, and then introduce a local learning rate to consider this observation for a more stable and efficient optimization. Experimental results show improvements compared with the state-of-the-art algorithm. Review: (1) Pros The proposed optimization method considers the dynamic self-adjustment of the learning rate in the optimization based on the ratio between the L2-norm of parameters and that of gradients on parameters when the batch size increases, and shows improvements in experiments compared with previous methods. (2) Cons i) LR "warm-up" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR "warm-up" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general? ii) There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term). iii) In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs. iv) Writing should be improved, e.g., the first paragraph in section 6. Some parts are confusing, for example, the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLarge Batch Training of Convolutional Networks with Layer-wise Adaptive Rate Scaling\nA common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with a mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. However, training with a large batch often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties, we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet and ResNet-50 to a batch size of 16K.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper provides an optimization approach for large batch training of CNN with layer-wise adaptive learning rates. It starts from the observation that the ratio between the L2-norm of parameters and that of gradients on parameters varies significantly in the optimization, and then introduce a local learning rate to consider this observation for a more stable and efficient optimization. Experimental results show improvements compared with the state-of-the-art algorithm. Review: (1) Pros The proposed optimization method considers the dynamic self-adjustment of the learning rate in the optimization based on the ratio between the L2-norm of parameters and that of gradients on parameters when the batch size increases, and shows improvements in experiments compared with previous methods. (2) Cons i) LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach. However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3. So does it mean that the proposed method cannot handle the problem in general? ii) There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate. The authors do not have a detail explanation and experiments about it. In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term). iii) In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs. iv) Writing should be improved, e.g., the first paragraph in section 6. Some parts are confusing, for example, the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02.",
"role": "assistant"
}
]
| [
" This paper provides an optimization approach for large batch training of CNN with layer-wise adaptive learning rates.",
"It starts from the observation that the ratio between the L2-norm of parameters and that of gradients on parameters varies significantly in the optimization, and then introduce a local learning rate to consider this observation for a more stable and efficient optimization.",
"Experimental results show improvements compared with the state-of-the-art algorithm.",
"Review: (1) Pros The proposed optimization method considers the dynamic self-adjustment of the learning rate in the optimization based on the ratio between the L2-norm of parameters and that of gradients on parameters when the batch size increases, and shows improvements in experiments compared with previous methods.",
"(2) Cons i) LR \"warm-up\" can mitigate the unstable training in the initial phase and the proposed method is also motivated by the stability but uses a different approach.",
"However, it seems that the authors also combine with LR \"warm-up\" in your proposed method in the experimental part, e.g., Table 3.",
"So does it mean that the proposed method cannot handle the problem in general?",
"ii) There is one coefficient that is independent from layers and needs to be set manually in the proposed local learning rate.",
"The authors do not have a detail explanation and experiments about it.",
"In fact, as can be seen in the Algorithm 1, this coefficient can be as an independent hyper-parameter (even is put with the global learning rate together as one fix term).",
"iii) In the section 6, when increase the training steps, experiments compared with previous methods should be implemented since they can also get better results with more epochs.",
"iv) Writing should be improved, e.g., the first paragraph in section 6.",
"Some parts are confusing, for example, the authors claim that they use initial LR=0.01, but in Table 1(a) it is 0.02."
] | {
"criticism": 2,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 11,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 8,
"suggestion_and_solution": 4,
"total": 13
} | 0.153846 | 0.153846 | 0.076923 | 0.846154 | 0.076923 | 0.230769 | 0.615385 | 0.307692 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.461538 | 2.461538 | 0 |
Image Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks | We propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function*s components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways. | Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed. Quality and significance: This is quite a technical paper, written in a very compressed form and is a bit hard to follow. Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models. Clarity: I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented. Originality: Seems original. Pros: * Interesting energy formulation and variation over BEGAN Cons: * Not a clear paper * results are only partially motivated and analyzed | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nImage Quality Assessment Techniques Improve Training and Evaluation of Energy-Based Generative Adversarial Networks\nWe propose a new, multi-component energy function for energy-based Generative Adversarial Networks (GANs) based on methods from the image quality assessment literature. Our approach expands on the Boundary Equilibrium Generative Adversarial Network (BEGAN) by outlining some of the short-comings of the original energy and loss functions. We address these short-comings by incorporating an l1 score, the Gradient Magnitude Similarity score, and a chrominance score into the new energy function. We then provide a set of systematic experiments that explore its hyper-parameters. We show that each of the energy function*s components is able to represent a slightly different set of features, which require their own evaluation criteria to assess whether they have been adequately learned. We show that models using the new energy function are able to produce better image representations than the BEGAN model in predicted ways.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term. The model is then trained with CelebA under different parameters settings and results are analyzed. Quality and significance: This is quite a technical paper, written in a very compressed form and is a bit hard to follow. Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models. Clarity: I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented. Originality: Seems original. Pros: * Interesting energy formulation and variation over BEGAN Cons: * Not a clear paper * results are only partially motivated and analyzed",
"role": "assistant"
}
]
| [
" Quick summary: This paper proposes an energy based formulation to the BEGAN model and modifies it to include an image quality assessment based term.",
"The model is then trained with CelebA under different parameters settings and results are analyzed.",
"Quality and significance: This is quite a technical paper, written in a very compressed form and is a bit hard to follow.",
"Mostly it is hard to estimate what is the contribution of the model and how the results differ from baseline models.",
"Clarity: I would say this is one of the weak points of the paper - the paper is not well motivated and the results are not clearly presented.",
"Originality: Seems original.",
"Pros: * Interesting energy formulation and variation over BEGAN Cons: * Not a clear paper * results are only partially motivated and analyzed"
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0.571429 | 0 | 0 | 0.571429 | 0.285714 | 0.285714 | 0.571429 | 0.142857 | 7 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
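The Gradient Magnitude Similarity score mentioned in the abstract above comes from the image-quality-assessment literature (GMSD). A rough sketch of how such a per-pixel similarity map is typically computed for greyscale images; the Prewitt kernels and the stability constant c are conventional choices from that literature, not values taken from this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gradient_magnitude_similarity(ref, dist, c=0.0026):
    """Per-pixel gradient-magnitude similarity of two greyscale images in [0, 1].

    Values near 1 mean the local gradient structure of the two images agrees;
    the constant c only stabilises the ratio where both gradients are tiny.
    """
    prewitt_x = np.array([[1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0]]) / 3.0
    prewitt_y = prewitt_x.T

    def grad_mag(img):
        return np.sqrt(convolve(img, prewitt_x) ** 2 + convolve(img, prewitt_y) ** 2)

    m_r, m_d = grad_mag(ref), grad_mag(dist)
    return (2.0 * m_r * m_d + c) / (m_r ** 2 + m_d ** 2 + c)
```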
Training GANs with Optimism | We address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam. | This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS). The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10. This paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.The issue of last iterate versus average iterate for non convex-concave problems is also presented well. The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game. The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest. 
In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates). OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn't seem to be even an attempt at explaining this in the paper. I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. This gives the impression that the experimental results might have been at least slightly "gamed". For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality. Specific comments: Page 1, "no-regret dynamics in zero-sum games can very often lead to limit cycles": I don't think limit cycles are actually ever formally defined in the entire paper. Page 3, "standard results in game theory and no-regret learning": These results should be either proven or cited. Page 3: Don't the parameter spaces need to be bounded for these convergence results to hold? Page 4, "it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm": For completeness, this should probably either be (quickly) proven or a reference should be provided. Page 5, "the unique equilibrium of the above game is...for the discriminator to choose w=0": Why is w=0 necessary here? Page 6, "We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A": Why is this true? This should either be proven or cited. Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. In the Initialization section, it appears simply to be out of convenience. Page 6, Theorem 1: It should be explicitly stated that this result doesn't provide a convergence rate, in contrast to the existing OMD results cited in the paper. Page 7, "we considered momentum, Nesterov momentum and AdaGrad": Why isn't Adam used in this section if it is used in later experiments? Page 7-8, "When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? None of the theoretical arguments presented earlier in the paper seem to even hint at this. The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. However, this doesn't even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD. Page 8, "Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants": Learning rate sensitivity doesn't seem to be touched upon in the earlier discussion. Can these results be explained by theory? Page 8, "we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training": These results don't mean much without error bars. 
Page 8, "we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....": Why didn*t this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times? After reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7. Just as a note, Figure 3b is now very difficult to read. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTraining GANs with Optimism\nWe address the issue of limit cycling behavior in training Generative Adversarial Networks and propose the use of Optimistic Mirror Decent (OMD) for training Wasserstein GANs. Recent theoretical results have shown that optimistic mirror decent (OMD) can enjoy faster regret rates in the context of zero-sum games. WGANs is exactly a context of solving a zero-sum game with simultaneous no-regret dynamics. Moreover, we show that optimistic mirror decent addresses the limit cycling problem in training WGANs. We formally show that in the case of bi-linear zero-sum games the last iterate of OMD dynamics converges to an equilibrium, in contrast to GD dynamics which are bound to cycle. We also portray the huge qualitative difference between GD and OMD dynamics with toy examples, even when GD is modified with many adaptations proposed in the recent literature, such as gradient penalty or momentum. We apply OMD WGAN training to a bioinformatics problem of generating DNA sequences. We observe that models trained with OMD achieve consistently smaller KL divergence with respect to the true underlying distribution, than models trained with GD variants. Finally, we introduce a new algorithm, Optimistic Adam, which is an optimistic variant of Adam. We apply it to WGAN training on CIFAR10 and observe improved performance in terms of inception score as compared to Adam.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS). The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games. Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well. The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training. They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10. This paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games. Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.The issue of last iterate versus average iterate for non convex-concave problems is also presented well. The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013. Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game. The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest. 
In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates). OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn*t seem to be even an attempt at explaining this in the paper. I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5. This gives the impression that the experimental results might have been at least slightly \"gamed\". For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality. Specific comments: Page 1, \"no-regret dynamics in zero-sum games can very often lead to limit cycles\": I don*t think limit cycles are actually ever formally defined in the entire paper. Page 3, \"standard results in game theory and no-regret learning\": These results should be either proven or cited. Page 3: Don*t the parameter spaces need to be bounded for these convergence results to hold? Page 4, \"it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm\": For completeness, this should probably either be (quickly) proven or a reference should be provided. Page 5, \"the unique equilibrium of the above game is...for the discriminator to choose w=0\": Why is w=0 necessary here? Page 6, \"We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A\": Why is this true? This should either be proven or cited. Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result. In the Initialization section, it appears simply to be out of convenience. Page 6, Theorem 1: It should be explicitly stated that this result doesn*t provide a convergence rate, in contrast to the existing OMD results cited in the paper. Page 7, \"we considered momentum, Nesterov momentum and AdaGrad\": Why isn*t Adam used in this section if it is used in later experiments? Page 7-8, \"When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.\": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set? None of the theoretical arguments presented earlier in the paper seem to even hint at this. The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch. However, this doesn*t even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD. Page 8, \"Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants\": Learning rate sensitivity doesn*t seem to be touched upon in the earlier discussion. Can these results be explained by theory? Page 8, \"we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training\": These results don*t mean much without error bars. 
Page 8, \"we only trained the discriminator once after one iteration of generator training. The latter is inline with the intuition behind the use of optimism....\": Why didn*t this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times? After reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7. Just as a note, Figure 3b is now very difficult to read.",
"role": "assistant"
}
]
| [
" This paper proposes the use of optimistic mirror descent to train Wasserstein Generative Adversarial Networks (WGANS).",
"The authors remark that the current training of GANs, which amounts to solving a zero-sum game between a generator and discriminator, is often unstable, and they argue that one source of instability is due to limit cycles, which can occur for FTRL-based algorithms even in convex-concave zero-sum games.",
"Motivated by recent results that use Optimistic Mirror Descent (OMD) to achieve faster convergence rates (than standard gradient descent) in convex-concave zero-sum games and normal form games, they suggest using these techniques for WGAN training as well.",
"The authors prove that, using OMD, the last iterate converges to an equilibrium and use this as motivation that OMD methods should be more stable for WGAN training.",
"They then compare OMD against GD on both toy simulations and a DNA sequence task before finally introducing an adaptive generalization of OMD, Optimistic Adam, that they test on CIFAR10.",
"This paper is relatively well-written and clear, and the authors do a good job of introducing the problem of GAN training instability as well as the OMD algorithm, in particular highlighting its differences with standard gradient descent as well as discussing existing work that has applied it to zero-sum games.",
"Given the recent work on OMD for zero-sum and normal form games, it is natural to study its effectiveness in training GANs.The issue of last iterate versus average iterate for non convex-concave problems is also presented well.",
"The theoretical result on last-iterate convergence of OMD for bilinear games is interesting, but somewhat wanting as it does not provide an explicit convergence rate as in Rakhlin and Sridharan, 2013.",
"Moreover, the result is only at best a motivation for using OMD in WGAN training since the WGAN optimization problem is not a bilinear game.",
"The experimental results seem to indicate that OMD is at least roughly competitive with GD-based methods, although they seem less compelling than the prior discussion in the paper would suggest.",
"In particular, they are matched by SGD with momentum when evaluated by last epoch performance (albeit while being less sensitive to learning rates).",
"OMD does seem to outperform SGD-based methods when using the lowest discriminator loss, but there doesn*t seem to be even an attempt at explaining this in the paper.",
"I found it a bit odd that Adam was not used as a point of comparison in Section 5, that optimistic Adam was only introduced and tested for CIFAR but not for the DNA sequence problem, and that the discriminator was trained for 5 iterations in Section 5 but only once in Section 6, despite the fact that the reasoning provided in Section 6 seems like it would have also applied for Section 5.",
"This gives the impression that the experimental results might have been at least slightly \"gamed\".",
"For the reasons above, I give the paper high marks on clarity, and slightly above average marks on originality, significance, and quality.",
"Specific comments: Page 1, \"no-regret dynamics in zero-sum games can very often lead to limit cycles\": I don*t think limit cycles are actually ever formally defined in the entire paper.",
"Page 3, \"standard results in game theory and no-regret learning\": These results should be either proven or cited.",
"Page 3: Don*t the parameter spaces need to be bounded for these convergence results to hold?",
"Page 4, \"it is well known that GD is equivalent to the Follow-the-Regularized-Leader algorithm\": For completeness, this should probably either be (quickly) proven or a reference should be provided.",
"Page 5, \"the unique equilibrium of the above game is...for the discriminator to choose w=0\": Why is w=0 necessary here?",
"Page 6, \"We remark that the set of equilibrium solutions of this minimax problem are pairs (x,y) such that x is in the null space of A^T and y is in the null space of A\": Why is this true?",
"This should either be proven or cited.",
"Page 6, Initialization and Theorem 1: It would be good to discuss the necessity of this particular choice of initialization for the theoretical result.",
"In the Initialization section, it appears simply to be out of convenience.",
"Page 6, Theorem 1: It should be explicitly stated that this result doesn*t provide a convergence rate, in contrast to the existing OMD results cited in the paper.",
"Page 7, \"we considered momentum, Nesterov momentum and AdaGrad\": Why isn*t Adam used in this section if it is used in later experiments?",
"Page 7-8, \"When evaluated by....the lowest discriminator loss on the validation set, WGAN trained with Stochastic OMD (SOMD) achieved significantly lower KL divergence than the competing SGD variants.",
"\": Can you explain why SOMD outperforms the other methods when using the lowest discriminator loss on the validation set?",
"None of the theoretical arguments presented earlier in the paper seem to even hint at this.",
"The only result that one might expect from the earlier discussion and results is that SOMD would outperform the other methods when evaluating by the last epoch.",
"However, this doesn*t even really hold, since there exist learning rates in which SGD with momentum matches the performance of SOMD.",
"Page 8, \"Evaluated by the last epoch, SOMD is much less sensitive to the choice of learning rate than the SGD variants\": Learning rate sensitivity doesn*t seem to be touched upon in the earlier discussion.",
"Can these results be explained by theory?",
"Page 8, \"we see that optimistic Adam achieves high numbers of inception scores after very few epochs of training\": These results don*t mean much without error bars.",
"Page 8, \"we only trained the discriminator once after one iteration of generator training.",
"The latter is inline with the intuition behind the use of optimism....\": Why didn*t this logic apply to the previous section on DNA sequences, where the discriminator was trained multiple times?",
"After reading the response of the authors (in particular their clarification of some technical results and the extra experiments they carried out during the rebuttal period), I have decided to upgrade my rating of the paper from a 6 to a 7.",
"Just as a note, Figure 3b is now very difficult to read."
] | {
"criticism": 11,
"example": 14,
"importance_and_relevance": 3,
"materials_and_methods": 22,
"praise": 3,
"presentation_and_reporting": 9,
"results_and_discussion": 21,
"suggestion_and_solution": 7,
"total": 38
} | 0.289474 | 0.368421 | 0.078947 | 0.578947 | 0.078947 | 0.236842 | 0.552632 | 0.184211 | 38 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.368421 | -7.495598 | 9.864019 |
Learning Weighted Representations for Generalization Across Designs | Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation. | This paper proposes a deep learning architecture for joint learning of feature representation, a target-task mapping function, and a sample re-weighting function. Specifically, the method tries to discover feature representations, which are invariance in different domains, by minimizing the re-weighted empirical risk and distributional shift between designs. Overall, the paper is well written and organized with good description on the related work, research background, and theoretic proofs. The main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift. However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of main interests, it is expected to add the related analysis in the experiment section. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Weighted Representations for Generalization Across Designs\nPredictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a deep learning architecture for joint learning of feature representation, a target-task mapping function, and a sample re-weighting function. Specifically, the method tries to discover feature representations, which are invariance in different domains, by minimizing the re-weighted empirical risk and distributional shift between designs. Overall, the paper is well written and organized with good description on the related work, research background, and theoretic proofs. The main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift. However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of main interests, it is expected to add the related analysis in the experiment section.",
"role": "assistant"
}
]
| [
" This paper proposes a deep learning architecture for joint learning of feature representation, a target-task mapping function, and a sample re-weighting function.",
"Specifically, the method tries to discover feature representations, which are invariance in different domains, by minimizing the re-weighted empirical risk and distributional shift between designs.",
"Overall, the paper is well written and organized with good description on the related work, research background, and theoretic proofs.",
"The main contribution can be the idea of learning a sample re-weighting function, which is highly important in domain shift.",
"However, as stated in the paper, since the causal effect of an intervention T on Y conditioned on X is one of main interests, it is expected to add the related analysis in the experiment section."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 5
} | 0 | 0 | 0.4 | 0.8 | 0.2 | 0.2 | 0.6 | 0.2 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.4 | 1.389924 | 1.010076 |
Learning Weighted Representations for Generalization Across Designs | Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation. | The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still "work" and the distribution of X can also change without affecting the accuracy of the predictions. What is proposed seems to be twofold: - instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound. The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size. - as an additional ingredient the authors also propose "representation learning" by mapping x to some representation Phi(x). The goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance. Pros: - The problem is relevant and also appears in similar form in domain adaptation and transfer learning. - The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al. Cons: - I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others. - The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense. - The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_\Phi, C^\mathcal{F}_{n,\delta} and D^{\Phi,\mathcal{H}}_\delta. Why would we expect these quantities to be small or bounded? 
How does that compare to the assumptions needed for standard inverse probability weighting? - I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Weighted Representations for Generalization Across Designs\nPredictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the generalization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic framework inspired by our bound and verify is effectiveness in causal effect estimation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X. The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x. If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still \"work\" and the distribution of X can also change without affecting the accuracy of the predictions. What is proposed seems to be twofold: - instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound. The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size. - as an additional ingredient the authors also propose \"representation learning\" by mapping x to some representation Phi(x). The goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance. Pros: - The problem is relevant and also appears in similar form in domain adaptation and transfer learning. - The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al. Cons: - I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others. - The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail. Especially the second half of page 5 is at times very hard to understand as it is so dense. - The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_.Phi, C^.mathcal{F}_{n,.delta} and D^{.Phi,.mathcal{H}}_.delta. Why would we expect these quantities to be small or bounded? 
How does that compare to the assumptions needed for standard inverse probability weighting? - I appreciate that it is difficult to find good test datasets for evaluating causal estimator. The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?). The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally.",
"role": "assistant"
}
]
| [
" The paper proposes a novel way of causal inference in situations where in causal SEM notation the outcome Y = f(T,X) is a function of a treatment T and covariates X.",
"The goal is to infer the treatment effect E(Y|T=1,X=x) - E(Y|T=0,X=x) for binary treatments at every location x.",
"If the treatment effect can be learned, then forecasts of Y under new policies that assign treatment conditional on X will still \"work\" and the distribution of X can also change without affecting the accuracy of the predictions.",
"What is proposed seems to be twofold: - instead of using a standard inverse probability weighting, the authors construct a bound for the prediction performance under new distributions of X and new policies and learn the weights by optimizing this bound.",
"The goal is to avoid issues that arise if the ratio between source and target densities become very large or small and the weights in a standard approach would become very sparse, thus leading to a small effective sample size.",
"- as an additional ingredient the authors also propose \"representation learning\" by mapping x to some representation Phi(x).",
"The goal is to learn the mapping Phi (and its inverse) and the weighting function simultaneously by optimizing the derived bound on the prediction performance.",
"Pros: - The problem is relevant and also appears in similar form in domain adaptation and transfer learning.",
"- The derived bounds and procedures are interesting and nontrivial, even if there is some overlap with earlier work of Shalit et al.",
"Cons: - I am not sure if ICLR is the optimal venue for this manuscript but will leave this decision to others.",
"- The manuscript is written in a very compact style and I wish some passages would have been explained in more depth and detail.",
"Especially the second half of page 5 is at times very hard to understand as it is so dense.",
"- The implications of the assumptions in Theorem 1 are not easy to understand, especially relating to the quantities B_.Phi, C^.mathcal{F}_{n,.delta} and D^{.Phi,.mathcal{H}}_.delta.",
"Why would we expect these quantities to be small or bounded?",
"How does that compare to the assumptions needed for standard inverse probability weighting?",
"- I appreciate that it is difficult to find good test datasets for evaluating causal estimator.",
"The experiment on the semi-synthetic IHDP dataset is ok, even though there is very little information about its structure in the manuscript (even basic information like number of instances or dimensions seems missing?).",
"The example does not provide much insight into the main ideas and when we would expect the procedure to work more generally."
] | {
"criticism": 7,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 8,
"suggestion_and_solution": 3,
"total": 18
} | 0.388889 | 0.055556 | 0.166667 | 0.611111 | 0.222222 | 0.166667 | 0.444444 | 0.166667 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.827661 | 0.394561 |
Identifying Analogies Across Domains | Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision. | The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis. The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not). The algorithm is tested on three datasets. While the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts), the paper lacks clarity and sufficient details. Instead of the longer intro and related work discussion, I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments. Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level. It is hard to understand sufficiently well what the formalism means without more insight. Also, the experiments need more details. For example, it is not clear what the numbers in Table 2 mean. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIdentifying Analogies Across Domains\nIdentifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis. The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not). The algorithm is tested on three datasets. While the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts), the paper lacks clarity and sufficient details. Instead of the longer intro and related work discussion, I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments. Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level. It is hard to understand sufficiently well what the formalism means without more insight. Also, the experiments need more details. For example, it is not clear what the numbers in Table 2 mean.",
"role": "assistant"
}
]
| [
" The paper presents a method for finding related images (analogies) from different domains based on matching-by-synthesis.",
"The general idea is interesting and the results show improvements over previous approaches, such as CycleGAN (with different initializations, pre-learned or not).",
"The algorithm is tested on three datasets.",
"While the approach has some strong positive points, such as good experiments and theoretical insights (the idea to match by synthesis and the proposed loss which is novel, and combines the proposed concepts), the paper lacks clarity and sufficient details.",
"Instead of the longer intro and related work discussion, I would prefer to see a Figure with the architecture and more illustrative examples to show that the insights are reflected in the experiments.",
"Also, the matching part, which is discussed at the theoretical level, could be better explained and presented at a more visual level.",
"It is hard to understand sufficiently well what the formalism means without more insight.",
"Also, the experiments need more details.",
"For example, it is not clear what the numbers in Table 2 mean."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 9
} | 0.333333 | 0.111111 | 0.333333 | 0.777778 | 0.222222 | 0.444444 | 0.333333 | 0.333333 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.888889 | 2.63637 | 0.252519 |
Hallucinating brains with artificial brains | Human brain function as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view that generative models provide a useful tool for understanding this variability. To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images. Brain images are high dimensional tensors which exhibit structured spatial correlations. Thus, both models are 3D conditional Generative Adversarial networks (GANs) which apply Convolutional Neural Networks (CNNs) to learn an abstraction of brain image representations. Our results show that the generated brain images are diverse, yet task dependent. In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers (also known as decoding, or brain reading). Our approach achieves significant improvements for a variety of datasets, classification tasks and evaluation scores. Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript. | The work is motivated by a real challenge of neuroimaging analysis: how to increase the amount of data to support the learning of brain decoding. The contribution seems to mix two objectives: on one hand to prove that it is possible to do data augmentation for fMRI brain decoding, on the other hand to design (or better to extend) a new model (to be more precise two models). Concerning the first objective the empirical results do not provide meaningful support that the generative model is really effective. The improvement is really tiny and a statistical test (not included in the analysis) probably wouldn't pass a significant threshold. This analysis is missing a straw man. It is not clear whether the difference in the evaluation measures is related to the greater number of examples or by the specific generative model. Concerning the contribution of the model, one novelty is the conditional formulation of the discriminator. The design of the empirical evaluation doesn't address the analysis of the impact of this new formulation. It is not clear whether the supposed improvement is related to the conditional formulation. Figure 3 and Figure 5 illustrate the brain maps generated for Collection 1952 with ICW-GAN and for collection 503 with ACD-GAN. It is not clear how the authors operated the choices of these figures. From the perspective of neuroscience a reader, would expect to look at the brain maps for the same collection with different methods. The pairwise brain maps would support the interpretation of the generated data. It is worthwhile to remember that the location of brain activations is crucial to detect whether the brain decoding (classification) relies on artifacts or confounds. Minor comments - typos: "a first application or this" => "a first application of this" (p.2) - "qualitative quality" (p.2) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHallucinating brains with artificial brains\nHuman brain function as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view that generative models provide a useful tool for understanding this variability. To this end, this manuscript presents two novel generative models trained on real neuroimaging data which synthesize task-dependent functional brain images. Brain images are high dimensional tensors which exhibit structured spatial correlations. Thus, both models are 3D conditional Generative Adversarial networks (GANs) which apply Convolutional Neural Networks (CNNs) to learn an abstraction of brain image representations. Our results show that the generated brain images are diverse, yet task dependent. In addition to qualitative evaluation, we utilize the generated synthetic brain volumes as additional training data to improve downstream fMRI classifiers (also known as decoding, or brain reading). Our approach achieves significant improvements for a variety of datasets, classifi- cation tasks and evaluation scores. Our classification results provide a quantitative evaluation of the quality of the generated images, and also serve as an additional contribution of this manuscript.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The work is motivated by a real challenge of neuroimaging analysis: how to increase the amount of data to support the learning of brain decoding. The contribution seems to mix two objectives: on one hand to prove that it is possible to do data augmentation for fMRI brain decoding, on the other hand to design (or better to extend) a new model (to be more precise two models). Concerning the first objective the empirical results do not provide meaningful support that the generative model is really effective. The improvement is really tiny and a statistical test (not included in the analysis) probably wouldn*t pass a significant threshold. This analysis is missing a straw man. It is not clear whether the difference in the evaluation measures is related to the greater number of examples or by the specific generative model. Concerning the contribution of the model, one novelty is the conditional formulation of the discriminator. The design of the empirical evaluation doesn*t address the analysis of the impact of this new formulation. It is not clear whether the supposed improvement is related to the conditional formulation. Figure 3 and Figure 5 illustrate the brain maps generated for Collection 1952 with ICW-GAN and for collection 503 with ACD-GAN. It is not clear how the authors operated the choices of these figures. From the perspective of neuroscience a reader, would expect to look at the brain maps for the same collection with different methods. The pairwise brain maps would support the interpretation of the generated data. It is worthwhile to remember that the location of brain activations is crucial to detect whether the brain decoding (classification) relies on artifacts or confounds. Minor comments - typos: \"a first application or this\" => \"a first application of this\" (p.2) - \"qualitative quality\" (p.2)",
"role": "assistant"
}
]
| [
" The work is motivated by a real challenge of neuroimaging analysis: how to increase the amount of data to support the learning of brain decoding.",
"The contribution seems to mix two objectives: on one hand to prove that it is possible to do data augmentation for fMRI brain decoding, on the other hand to design (or better to extend) a new model (to be more precise two models).",
"Concerning the first objective the empirical results do not provide meaningful support that the generative model is really effective.",
"The improvement is really tiny and a statistical test (not included in the analysis) probably wouldn*t pass a significant threshold.",
"This analysis is missing a straw man.",
"It is not clear whether the difference in the evaluation measures is related to the greater number of examples or by the specific generative model.",
"Concerning the contribution of the model, one novelty is the conditional formulation of the discriminator.",
"The design of the empirical evaluation doesn*t address the analysis of the impact of this new formulation.",
"It is not clear whether the supposed improvement is related to the conditional formulation.",
"Figure 3 and Figure 5 illustrate the brain maps generated for Collection 1952 with ICW-GAN and for collection 503 with ACD-GAN.",
"It is not clear how the authors operated the choices of these figures.",
"From the perspective of neuroscience a reader, would expect to look at the brain maps for the same collection with different methods.",
"The pairwise brain maps would support the interpretation of the generated data.",
"It is worthwhile to remember that the location of brain activations is crucial to detect whether the brain decoding (classification) relies on artifacts or confounds.",
"Minor comments - typos: \"a first application or this\" => \"a first application of this\" (p.2) - \"qualitative quality\" (p.2)"
] | {
"criticism": 6,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 13,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 5,
"suggestion_and_solution": 2,
"total": 15
} | 0.4 | 0.133333 | 0.2 | 0.866667 | 0.066667 | 0.066667 | 0.333333 | 0.133333 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.13687 | 0.06313 |
Autoregressive Convolutional Neural Networks for Asynchronous Time Series | We propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. The pro-posed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible. | The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications. The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting. This paper is mostly an applications paper. The method itself seems like a fairly simple extension for a particular application, although perhaps the authors have not clearly highlighted details of methodological innovation. I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines. However, as an an applications paper, the bread of experiments are a little bit lacking -- with only that one potentially interesting dataset, which happens to proprietary. Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful. The writing of the paper is a bit loose with comments like: “Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.” Parts of the also appear rush written, with some sentences half finished: “"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.”” As a minor comment, the statement “however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).” Is a bit too broad. It depends where the Gaussianity appears. If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutoregressive Convolutional Neural Networks for Asynchronous Time Series\nWe propose Significance-Offset Convolutional Neural Network, a deep convolutional network architecture for regression of multivariate asynchronous time series. The model is inspired by standard autoregressive (AR) models and gating mechanisms used in recurrent neural networks. It involves an AR-like weighting system, where the final predictor is obtained as a weighted sum of adjusted regressors, while the weights are data-dependent functions learnt through a convolutional network. The architecture was designed for applications on asynchronous time series and is evaluated on such datasets: a hedge fund proprietary dataset of over 2 million quotes for a credit derivative index, an artificially generated noisy autoregressive series and household electricity consumption dataset. The pro-posed architecture achieves promising results as compared to convolutional and recurrent neural networks. The code for the numerical experiments and the architecture implementation will be shared online to make the research reproducible.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications. The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting. This paper is mostly an applications paper. The method itself seems like a fairly simple extension for a particular application, although perhaps the authors have not clearly highlighted details of methodological innovation. I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines. However, as an an applications paper, the bread of experiments are a little bit lacking -- with only that one potentially interesting dataset, which happens to proprietary. Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful. The writing of the paper is a bit loose with comments like: “Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.” Parts of the also appear rush written, with some sentences half finished: “\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.”” As a minor comment, the statement “however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).” Is a bit too broad. It depends where the Gaussianity appears. If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables.",
"role": "assistant"
}
]
| [
" The authors propose an extension to CNN using an autoregressive weighting for asynchronous time series applications.",
"The method is applied to a proprietary dataset as well as a couple UCI problems and a synthetic dataset, showing improved performance over baselines in the asynchronous setting.",
"This paper is mostly an applications paper.",
"The method itself seems like a fairly simple extension for a particular application, although perhaps the authors have not clearly highlighted details of methodological innovation.",
"I liked that the method was motivated to solve a real problem, and that it does seem to do so well compared to reasonable baselines.",
"However, as an an applications paper, the bread of experiments are a little bit lacking -- with only that one potentially interesting dataset, which happens to proprietary.",
"Given the fairly empirical nature of the paper in general, it feels like a strong argument should be made, which includes experiments, that this work will be generally significant and impactful.",
"The writing of the paper is a bit loose with comments like: “Besides these and claims of secretive hedge funds (it can be marketing surfing on the deep learning hype), no promising results or innovative architectures were publicly published so far, to the best of our knowledge.” Parts of the also appear rush written, with some sentences half finished: “\"ues of x might be heterogenous, hence On the other hand, significance network provides data-dependent weights for all regressors and sums them up in autoregressive manner.”” As a minor comment, the statement “however, due to assumed Gaussianity they are inappropriate for financial datasets, which often follow fat-tailed distributions (Cont, 2001).” Is a bit too broad.",
"It depends where the Gaussianity appears.",
"If the likelihood is non-Gaussian, then it often doesn’t matter if there are latent Gaussian variables."
] | {
"criticism": 3,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 10
} | 0.3 | 0.1 | 0.4 | 0.7 | 0.2 | 0.1 | 0.3 | 0.1 | 10 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | 2.057958 | 0.142042 |
Statestream: A toolbox to explore layerwise-parallel deep neural networks | Building deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We layout basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks. | In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies. While the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nStatestream: A toolbox to explore layerwise-parallel deep neural networks\nBuilding deep neural networks to control autonomous agents which have to interact in real-time with the physical world, such as robots or automotive vehicles, requires a seamless integration of time into a network’s architecture. The central question of this work is, how the temporal nature of reality should be reflected in the execution of a deep neural network and its components. Most artificial deep neural networks are partitioned into a directed graph of connected modules or layers and the layers themselves consist of elemental building blocks, such as single units. For most deep neural networks, all units of a layer are processed synchronously and in parallel, but layers themselves are processed in a sequential manner. In contrast, all elements of a biological neural network are processed in parallel. In this paper, we define a class of networks between these two extreme cases. These networks are executed in a streaming or synchronous layerwise-parallel manner, unlocking the layers of such networks for parallel processing. Compared to the standard layerwise-sequential deep networks, these new layerwise-parallel networks show a fundamentally different temporal behavior and flow of information, especially for networks with skip or recurrent connections. We argue that layerwise-parallel deep networks are better suited for future challenges of deep neural network design, such as large functional modularized and/or recurrent architectures as well as networks allocating different network capacities dependent on current stimulus and/or task complexity. We layout basic properties and discuss major challenges for layerwise-parallel networks. Additionally, we provide a toolbox to design, train, evaluate, and online-interact with layerwise-parallel networks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks. They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies. While the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice. In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets.",
"role": "assistant"
}
]
| [
" In this paper, the authors present an open-source toolbox to explore layerwise-parallel deep neural networks.",
"They offer an interesting and detailed comparison of the temporal progression of layerwise-parallel and layerwise-sequential networks, and differences that can emerge in the results of these two computation strategies.",
"While the open-source toolbox introduced in this paper can be an excellent resource for the community interested in exploring these networks, the present submission offers relatively few results actually using these networks in practice.",
"In order to make a more compelling case for these networks, the present submission could include more detailed investigations, perhaps demonstrating that they learn differently or better than other implementations on standard training sets."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 4
} | 0 | 0 | 0.75 | 1 | 0.5 | 0 | 0.5 | 0.25 | 4 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 3 | 1.721623 | 1.278377 |
Smooth Loss Functions for Deep Top-k Classification | The top- error is a common measure of performance in machine learning and computer vision. In practice, top- classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top- classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top- optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{"i}ve algorithm would require operations, where is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of . Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of . Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy. | This paper introduces a smooth surrogate loss function for the top-k SVM, for the purpose of plugging the SVM to the deep neural networks. The idea is to replace the order statistics, which is not smooth and has a lot of zero partial derivatives, to the exponential of averages, which is smooth and is a good approximation of the order statistics by a good selection of the "temperature parameter". The paper is well organized and clearly written. The idea deserves a publication. On the other hand, there might be better and more direct solutions to reduce the combinatorial complexity. When the temperature parameter is small enough, both of the original top-k SVM surrogate loss (6) and the smooth loss (9) can be computed precisely by sorting the vector s first, and take a good care of the boundary around s_{[k]}. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSmooth Loss Functions for Deep Top-k Classification\nThe top- error is a common measure of performance in machine learning and computer vision. In practice, top- classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top- classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top- optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{\"i}ve algorithm would require operations, where is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of . Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of . Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a smooth surrogate loss function for the top-k SVM, for the purpose of plugging the SVM to the deep neural networks. The idea is to replace the order statistics, which is not smooth and has a lot of zero partial derivatives, to the exponential of averages, which is smooth and is a good approximation of the order statistics by a good selection of the \"temperature parameter\". The paper is well organized and clearly written. The idea deserves a publication. On the other hand, there might be better and more direct solutions to reduce the combinatorial complexity. When the temperature parameter is small enough, both of the original top-k SVM surrogate loss (6) and the smooth loss (9) can be computed precisely by sorting the vector s first, and take a good care of the boundary around s_{[k]}.",
"role": "assistant"
}
]
| [
" This paper introduces a smooth surrogate loss function for the top-k SVM, for the purpose of plugging the SVM to the deep neural networks.",
"The idea is to replace the order statistics, which is not smooth and has a lot of zero partial derivatives, to the exponential of averages, which is smooth and is a good approximation of the order statistics by a good selection of the \"temperature parameter\".",
"The paper is well organized and clearly written.",
"The idea deserves a publication.",
"On the other hand, there might be better and more direct solutions to reduce the combinatorial complexity.",
"When the temperature parameter is small enough, both of the original top-k SVM surrogate loss (6) and the smooth loss (9) can be computed precisely by sorting the vector s first, and take a good care of the boundary around s_{[k]}."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 0,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 6
} | 0 | 0 | 0 | 0.5 | 0.666667 | 0.333333 | 0.166667 | 0.5 | 6 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 1.393328 | 0.773339 |
Smooth Loss Functions for Deep Top-k Classification | The top- error is a common measure of performance in machine learning and computer vision. In practice, top- classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top- classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top- optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{"i}ve algorithm would require operations, where is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of . Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of . Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy. | The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this paper is a promising contribution to handle those issues better. The paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models. As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps. I have two main concerns with the presentation. A/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss. This can be done by evaluating elementary symmetric polynomials at well-chosen values. The paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic. The paper also advances that GPU parallelization must be used to be able to efficiently train the network. Those claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful. The paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value. I am not sure why this is of interest - can*t the loss evaluation already be parallelized trivially over examples in a training/testing minibatch? I believe the paper could justify this approach better by providing a bit more insights as to why it is required. 
For instance: - What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials? - How do those compare with CE and L_{5, 1} with the proposed method? - Are numerical instabilities making this completely unfeasible? This would be especially interesting to understand if this explodes in practice, or if evaluations are just a slightly inaccurate without much accuracy loss. B/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework. Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting. Additional comments: - Figure 2b: this visualization is confusing. This is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1. This is not convincing. An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSmooth Loss Functions for Deep Top-k Classification\nThe top- error is a common measure of performance in machine learning and computer vision. In practice, top- classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top- classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top- optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a na{\"i}ve algorithm would require operations, where is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of . Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of . Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this paper is a promising contribution to handle those issues better. The paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models. As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps. I have two main concerns with the presentation. A/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss. This can be done by evaluating elementary symmetric polynomials at well-chosen values. The paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic. The paper also advances that GPU parallelization must be used to be able to efficiently train the network. Those claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful. The paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value. I am not sure why this is of interest - can*t the loss evaluation already be parallelized trivially over examples in a training/testing minibatch? I believe the paper could justify this approach better by providing a bit more insights as to why it is required. For instance: - What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials? - How do those compare with CE and L_{5, 1} with the proposed method? - Are numerical instabilities making this completely unfeasible? 
This would be especially interesting to understand if this explodes in practice, or if evaluations are just a slightly inaccurate without much accuracy loss. B/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework. Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting. Additional comments: - Figure 2b: this visualization is confusing. This is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1. This is not convincing. An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting.",
"role": "assistant"
}
]
| [
" The paper is clear and well written.",
"The proposed approach seems to be of interest and to produce interesting results.",
"As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this paper is a promising contribution to handle those issues better.",
"The paper proposes to use a top-k loss such as what has been explored with SVMs in the past, but with deep models.",
"As the loss is not smooth and has sparse gradients, the paper suggests to use a smoothed version where maximums are replaced by log-sum-exps.",
"I have two main concerns with the presentation.",
"A/ In addition to the main contribution, the paper devotes a significant amount of space to explaining how to compute the smoothed loss.",
"This can be done by evaluating elementary symmetric polynomials at well-chosen values.",
"The paper argues that classical methods for such evaluations (e.g., using the usual recurrence relation or more advanced methods that compensate for numerical errors) are not enough when using single precision floating point arithmetic.",
"The paper also advances that GPU parallelization must be used to be able to efficiently train the network.",
"Those claims are not substantiated, however, and the method proposed by the paper seems to add substantial complexity without really proving that it is useful.",
"The paper proposes a divide-and-conquer approach, where a small amount of parallelization can be achieved within the computation of a single elementary symmetric polynomial value.",
"I am not sure why this is of interest - can*t the loss evaluation already be parallelized trivially over examples in a training/testing minibatch?",
"I believe the paper could justify this approach better by providing a bit more insights as to why it is required.",
"For instance: - What accuracies and train/test times do you get using standard methods for the evaluation of elementary symmetric polynomials?",
"- How do those compare with CE and L_{5, 1} with the proposed method?",
"- Are numerical instabilities making this completely unfeasible?",
"This would be especially interesting to understand if this explodes in practice, or if evaluations are just a slightly inaccurate without much accuracy loss.",
"B/ No mention is made of the object detection problem, although multiple of the motivating examples in Figure 1 consider cases that would fall naturally into the object detection framework.",
"Although top-k classification considers in principle an easier problem (no localization), a discussion, as well as a comparison of top-k classification vs., e.g., discarding localization information out of object detection methods, could be interesting.",
"Additional comments: - Figure 2b: this visualization is confusing.",
"This is presented in the same figure and paragraph as the CIFAR results, but instead uses a single synthetic data point in dimension 5, and k=1.",
"This is not convincing.",
"An actual experiment using full dataset or minibatch gradients on CIFAR and the same k value would be more interesting."
] | {
"criticism": 6,
"example": 2,
"importance_and_relevance": 5,
"materials_and_methods": 17,
"praise": 4,
"presentation_and_reporting": 6,
"results_and_discussion": 7,
"suggestion_and_solution": 6,
"total": 24
} | 0.25 | 0.083333 | 0.208333 | 0.708333 | 0.166667 | 0.25 | 0.291667 | 0.25 | 24 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.208333 | 0.298659 | 1.909674 |
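The two reviews of this record both turn on the idea of smoothing a top-k margin loss by replacing a hard maximum with a temperature-controlled log-sum-exp. As a minimal sketch of that idea for the k = 1 special case only (PyTorch-style; the function name, the tau argument, and the margin handling are illustrative assumptions, and the paper's elementary-symmetric-polynomial algorithm for general k is not reproduced here):

```python
import torch

def smooth_hinge(scores, labels, tau=1.0):
    """Smoothed multiclass hinge: the hard max over (s_j + 1[j != y]) is
    replaced by tau * logsumexp(. / tau), giving dense, smooth gradients.
    Toy sketch of the k = 1 case discussed in the review, not the paper's code."""
    margin = torch.ones_like(scores)
    margin.scatter_(1, labels.unsqueeze(1), 0.0)   # no margin on the true class
    augmented = scores + margin                    # s_j + 1 for j != y, s_y for j == y
    smooth_max = tau * torch.logsumexp(augmented / tau, dim=1)
    true_score = scores.gather(1, labels.unsqueeze(1)).squeeze(1)
    return torch.relu(smooth_max - true_score).mean()

# usage sketch: scores = model(x) with shape (batch, classes), labels with shape (batch,)
# loss = smooth_hinge(scores, labels, tau=0.1); loss.backward()
```

As tau goes to zero this recovers the hard multiclass hinge, which is the limit behavior the second review alludes to when it asks whether the smooth loss can simply be computed by sorting.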
Neural Tree Transducers for Tree to Tree Learning | We introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode multiple depth trees. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction. | There may be some interesting ideas here, but I think in many places the mathematical description is very confusing and/or flawed. To give some examples: * Just before section 2.1.1, P(T) = .prod_{p .in Path(T)} ... : it*s not clear at all clear that this defines a valid distribution over trees. There is an implicit order over the paths in Path(T) that is simply not defined (otherwise how for x^p could we decide which symbols x^1 ... x^{p-1} to condition upon?) * "We can write S -> O | v | .epsilon..." with S, O and v defined as sets. This is certainly non-standard notation, more explanation is needed. * "The observation is generated by the sequence of left most production rules". This appears to be related to the idea of left-most derivations in context-free grammars. But no discussion is given, and the writing is again vague/imprecise. * "Although the above grammar is not, in general, context free" - I*m not sure what is being referred to here. Are the authors referring to the underlying grammar, or the lack of independence assumptions in the model? The grammar is clearly context-free; the lack of independence assumptions is a separate issue. * "In a probabilistic context-free grammar (PCFG), all production rules are independent": this is not an accurate statement, it*s not clear what is meant by production rules being independent. More accurate would be to say that the choice of rule is conditionally independent of all other information earlier in the derivation, once the non-terminal being expanded is conditioned upon. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNeural Tree Transducers for Tree to Tree Learning\nWe introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode multiple depth trees. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n There may be some interesting ideas here, but I think in many places the mathematical description is very confusing and/or flawed. To give some examples: * Just before section 2.1.1, P(T) = .prod_{p .in Path(T)} ... : it*s not clear at all clear that this defines a valid distribution over trees. There is an implicit order over the paths in Path(T) that is simply not defined (otherwise how for x^p could we decide which symbols x^1 ... x^{p-1} to condition upon?) * \"We can write S -> O | v | .epsilon...\" with S, O and v defined as sets. This is certainly non-standard notation, more explanation is needed. * \"The observation is generated by the sequence of left most production rules\". This appears to be related to the idea of left-most derivations in context-free grammars. But no discussion is given, and the writing is again vague/imprecise. * \"Although the above grammar is not, in general, context free\" - I*m not sure what is being referred to here. Are the authors referring to the underlying grammar, or the lack of independence assumptions in the model? The grammar is clearly context-free; the lack of independence assumptions is a separate issue. * \"In a probabilistic context-free grammar (PCFG), all production rules are independent\": this is not an accurate statement, it*s not clear what is meant by production rules being independent. More accurate would be to say that the choice of rule is conditionally independent of all other information earlier in the derivation, once the non-terminal being expanded is conditioned upon.",
"role": "assistant"
}
]
| [
" There may be some interesting ideas here, but I think in many places the mathematical description is very confusing and/or flawed.",
"To give some examples: * Just before section 2.1.1, P(T) = .prod_{p .in Path(T)} ... : it*s not clear at all clear that this defines a valid distribution over trees.",
"There is an implicit order over the paths in Path(T) that is simply not defined (otherwise how for x^p could we decide which symbols x^1 ... x^{p-1} to condition upon?)",
"* \"We can write S -> O | v | .epsilon...\" with S, O and v defined as sets.",
"This is certainly non-standard notation, more explanation is needed.",
"* \"The observation is generated by the sequence of left most production rules\".",
"This appears to be related to the idea of left-most derivations in context-free grammars.",
"But no discussion is given, and the writing is again vague/imprecise.",
"* \"Although the above grammar is not, in general, context free\" - I*m not sure what is being referred to here.",
"Are the authors referring to the underlying grammar, or the lack of independence assumptions in the model?",
"The grammar is clearly context-free; the lack of independence assumptions is a separate issue.",
"* \"In a probabilistic context-free grammar (PCFG), all production rules are independent\": this is not an accurate statement, it*s not clear what is meant by production rules being independent.",
"More accurate would be to say that the choice of rule is conditionally independent of all other information earlier in the derivation, once the non-terminal being expanded is conditioned upon."
] | {
"criticism": 6,
"example": 5,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 11,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 13
} | 0.461538 | 0.384615 | 0.076923 | 0.230769 | 0.153846 | 0.846154 | 0.076923 | 0.153846 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.384615 | 2.384615 | 0 |
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions | The task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given {\textit{passage, question}} pair and select one of the given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {\textit{document, question}} pair and if so it tries to make the document representation orthogonal to this eliminatedd option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\% (relative) over the best reported performance on this dataset. | This paper proposes a new reading comprehension model for multi-choice questions and the main motivation is that some options should be eliminated first to infer better passage/question representations. It is a well-written paper, however, I am not very convinced by its motivation, the proposed model and the experimental results. First of all, the improvement is rather limited. It is only 0.4 improvement overall on the RACE dataset; although it outperforms GAR on 7 out of 13 categories; but why is it worse on the other 6 categories? I don’t see any convincing explanations here. Secondly, in terms of the development of reading comprehension models, I don’t see why we need to care about eliminating the irrelevant options. It is hard to generalize to any other RC/QA tasks. If the point is that the options can add useful information to induce better representations for passage/question, there should be some simple baselines in the middle that this paper should compare to. The two baselines SAR and GAR both only induce a representation from paragraph/question, and finally compare to the representation of each option. Maybe a simple baseline is to merge the question and all the options and see if a better document representation can be defined. Some visualizations/motivational examples could be also useful to understand how some options are eliminated and how the document representation has been changed based on that. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions\nThe task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given {\textit{passage, question}} pair and select one of the given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {\textit{document, question}} pair and if so it tries to make the document representation orthogonal to this eliminatedd option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\\% (relative) over the best reported performance on this dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a new reading comprehension model for multi-choice questions and the main motivation is that some options should be eliminated first to infer better passage/question representations. It is a well-written paper, however, I am not very convinced by its motivation, the proposed model and the experimental results. First of all, the improvement is rather limited. It is only 0.4 improvement overall on the RACE dataset; although it outperforms GAR on 7 out of 13 categories; but why is it worse on the other 6 categories? I don’t see any convincing explanations here. Secondly, in terms of the development of reading comprehension models, I don’t see why we need to care about eliminating the irrelevant options. It is hard to generalize to any other RC/QA tasks. If the point is that the options can add useful information to induce better representations for passage/question, there should be some simple baselines in the middle that this paper should compare to. The two baselines SAR and GAR both only induce a representation from paragraph/question, and finally compare to the representation of each option. Maybe a simple baseline is to merge the question and all the options and see if a better document representation can be defined. Some visualizations/motivational examples could be also useful to understand how some options are eliminated and how the document representation has been changed based on that.",
"role": "assistant"
}
]
| [
" This paper proposes a new reading comprehension model for multi-choice questions and the main motivation is that some options should be eliminated first to infer better passage/question representations.",
"It is a well-written paper, however, I am not very convinced by its motivation, the proposed model and the experimental results.",
"First of all, the improvement is rather limited.",
"It is only 0.4 improvement overall on the RACE dataset; although it outperforms GAR on 7 out of 13 categories; but why is it worse on the other 6 categories?",
"I don’t see any convincing explanations here.",
"Secondly, in terms of the development of reading comprehension models, I don’t see why we need to care about eliminating the irrelevant options.",
"It is hard to generalize to any other RC/QA tasks.",
"If the point is that the options can add useful information to induce better representations for passage/question, there should be some simple baselines in the middle that this paper should compare to.",
"The two baselines SAR and GAR both only induce a representation from paragraph/question, and finally compare to the representation of each option.",
"Maybe a simple baseline is to merge the question and all the options and see if a better document representation can be defined.",
"Some visualizations/motivational examples could be also useful to understand how some options are eliminated and how the document representation has been changed based on that."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 4,
"total": 11
} | 0.454545 | 0 | 0.272727 | 0.636364 | 0.090909 | 0.454545 | 0.272727 | 0.363636 | 11 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.545455 | 2.482325 | 0.06313 |
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions | The task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given {\textit{passage, question}} pair and select one of the given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {\textit{document, question}} pair and if so it tries to make the document representation orthogonal to this eliminatedd option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\% (relative) over the best reported performance on this dataset. | In this paper, a model is built for reading comprehension with multiple choices. The model consists of three modules: encoder, interaction module and elimination module. The major contributions are two folds: firstly, proposing the interesting option elimination problem for multi-step reading comprehension; and secondly, proposing the elimination module where a eliminate gate is used to select different orthogonal factors from the document representations. Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored. The elimination module is interesting, but the usefulness of “elimination” is not well justified for two reasons. First, the improvement of the proposed model over the previous state of the art is limited. Second, the model is built upon GAR until the elimination module, then according to Table 1 it seems to indicate that the elimination module does not help significantly (0.4% improvement). In order to show the usefulness of the elimination module, the model should be exactly built on the GAR with an additional elimination module (i.e. after removing the elimination module, the performance should be similar to GAR but not something significantly worse with a 42.58% accuracy). Then we can explicitly compare the performance between GAR and the GAR w/ elimination module to tell how much the new module helps. Other issues: 1) Is there any difference to directly use and instead of and to compute ? Even though the authors find the orthogonal vectors, they’re gated summed together very soon. 
It would be better to show how much “elimination” and “subtraction” effect the final performance, besides the effect of subtraction gate. 2) A figure showing the model architecture and the corresponding QA process will better help the readers understand the proposed model. 3) in page 5 is not defined. What’s the performance of only using for answer selection or replacing with in score function? 4) It would be better to have the experiments trained with different to show how multi-hop effects the final performance, besides the case study in Figure 3. Minor issues: 1) In Eqn. (4), it would be better to use a vector as the input of softmax. 2) It would be easier for discussion if the authors could assign numbers to every equation. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions\nThe task of Reading Comprehension with Multiple Choice Questions, requires a human (or machine) to read a given {\textit{passage, question}} pair and select one of the given options. The current state of the art model for this task first computes a query-aware representation for the passage and then \textit{selects} the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of \textit{elimination} and \textit{selection}. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times till the reader is finally ready to select the correct option. We propose \textit{ElimiNet}, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {\textit{document, question}} pair and if so it tries to make the document representation orthogonal to this eliminatedd option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large scale RACE dataset and show that it outperforms the current state of the art model on 7 out of the 13 question types in this dataset. Further we show that taking an ensemble of our \textit{elimination-selection} based method with a \textit{selection} based method gives us an improvement of 7\\% (relative) over the best reported performance on this dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, a model is built for reading comprehension with multiple choices. The model consists of three modules: encoder, interaction module and elimination module. The major contributions are two folds: firstly, proposing the interesting option elimination problem for multi-step reading comprehension; and secondly, proposing the elimination module where a eliminate gate is used to select different orthogonal factors from the document representations. Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored. The elimination module is interesting, but the usefulness of “elimination” is not well justified for two reasons. First, the improvement of the proposed model over the previous state of the art is limited. Second, the model is built upon GAR until the elimination module, then according to Table 1 it seems to indicate that the elimination module does not help significantly (0.4% improvement). In order to show the usefulness of the elimination module, the model should be exactly built on the GAR with an additional elimination module (i.e. after removing the elimination module, the performance should be similar to GAR but not something significantly worse with a 42.58% accuracy). Then we can explicitly compare the performance between GAR and the GAR w/ elimination module to tell how much the new module helps. Other issues: 1) Is there any difference to directly use and instead of and to compute ? Even though the authors find the orthogonal vectors, they’re gated summed together very soon. It would be better to show how much “elimination” and “subtraction” effect the final performance, besides the effect of subtraction gate. 2) A figure showing the model architecture and the corresponding QA process will better help the readers understand the proposed model. 3) in page 5 is not defined. What’s the performance of only using for answer selection or replacing with in score function? 4) It would be better to have the experiments trained with different to show how multi-hop effects the final performance, besides the case study in Figure 3. 
Minor issues: 1) In Eqn. (4), it would be better to use a vector as the input of softmax. 2) It would be easier for discussion if the authors could assign numbers to every equation.",
"role": "assistant"
}
]
| [
" In this paper, a model is built for reading comprehension with multiple choices.",
"The model consists of three modules: encoder, interaction module and elimination module.",
"The major contributions are two folds: firstly, proposing the interesting option elimination problem for multi-step reading comprehension; and secondly, proposing the elimination module where a eliminate gate is used to select different orthogonal factors from the document representations.",
"Intuitively, one answer option can be viewed as eliminated if the document representation vector has its factor along the option vector ignored.",
"The elimination module is interesting, but the usefulness of “elimination” is not well justified for two reasons.",
"First, the improvement of the proposed model over the previous state of the art is limited.",
"Second, the model is built upon GAR until the elimination module, then according to Table 1 it seems to indicate that the elimination module does not help significantly (0.4% improvement).",
"In order to show the usefulness of the elimination module, the model should be exactly built on the GAR with an additional elimination module (i.e.",
"after removing the elimination module, the performance should be similar to GAR but not something significantly worse with a 42.58% accuracy).",
"Then we can explicitly compare the performance between GAR and the GAR w/ elimination module to tell how much the new module helps.",
"Other issues: 1) Is there any difference to directly use and instead of and to compute ?",
"Even though the authors find the orthogonal vectors, they’re gated summed together very soon.",
"It would be better to show how much “elimination” and “subtraction” effect the final performance, besides the effect of subtraction gate.",
"2) A figure showing the model architecture and the corresponding QA process will better help the readers understand the proposed model.",
"3) in page 5 is not defined.",
"What’s the performance of only using for answer selection or replacing with in score function?",
"4) It would be better to have the experiments trained with different to show how multi-hop effects the final performance, besides the case study in Figure 3.",
"Minor issues: 1) In Eqn.",
"(4), it would be better to use a vector as the input of softmax.",
"2) It would be easier for discussion if the authors could assign numbers to every equation."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 14,
"praise": 2,
"presentation_and_reporting": 7,
"results_and_discussion": 6,
"suggestion_and_solution": 9,
"total": 20
} | 0.2 | 0.1 | 0.05 | 0.7 | 0.1 | 0.35 | 0.3 | 0.45 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 1.476661 | 0.773339 |
Global Convergence of Policy Gradient Methods for Linearized Control Problems | Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities. | I find this paper not suitable for ICLR. All the results are more or less direct applications of existing optimization techniques, and not provide fundamental new understandings of the learning REPRESENTATION. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGlobal Convergence of Policy Gradient Methods for Linearized Control Problems\nDirect policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an \"end-to-end\" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n I find this paper not suitable for ICLR. All the results are more or less direct applications of existing optimization techniques, and not provide fundamental new understandings of the learning REPRESENTATION.",
"role": "assistant"
}
]
| [
" I find this paper not suitable for ICLR.",
"All the results are more or less direct applications of existing optimization techniques, and not provide fundamental new understandings of the learning REPRESENTATION."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 1,
"praise": 0,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 2
} | 1 | 0 | 0.5 | 0.5 | 0 | 0 | 0.5 | 0 | 2 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 2.5 | 0.590326 | 1.909674 |
Global Convergence of Policy Gradient Methods for Linearized Control Problems | Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an "end-to-end" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities. | The work investigates convergence guarantees of gradient-type policies for reinforcement learning and continuous control problems, both in deterministic and randomized case, whiling coping with non-convexity of the objective. I found that the paper suffers many shortcomings that must be addressed: 1) The writing and organization is quite cumbersome and should be improved. 2) The authors state in the abstract (and elsewhere): "... showing that (model free) policy gradient methods globally converge to the optimal solution ...". This is misleading and NOT true. The authors show the convergence of the objective but not of the iterates sequence. This should be rephrased elsewhere. 3) An important literature on convergence of descent-type methods for semialgebraic objectives is available but not discussed. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGlobal Convergence of Policy Gradient Methods for Linearized Control Problems\nDirect policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model; 2) they are an \"end-to-end\" approach, directly optimizing the performance metric of interest; 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The work investigates convergence guarantees of gradient-type policies for reinforcement learning and continuous control problems, both in deterministic and randomized case, whiling coping with non-convexity of the objective. I found that the paper suffers many shortcomings that must be addressed: 1) The writing and organization is quite cumbersome and should be improved. 2) The authors state in the abstract (and elsewhere): \"... showing that (model free) policy gradient methods globally converge to the optimal solution ...\". This is misleading and NOT true. The authors show the convergence of the objective but not of the iterates sequence. This should be rephrased elsewhere. 3) An important literature on convergence of descent-type methods for semialgebraic objectives is available but not discussed.",
"role": "assistant"
}
]
| [
" The work investigates convergence guarantees of gradient-type policies for reinforcement learning and continuous control problems, both in deterministic and randomized case, whiling coping with non-convexity of the objective.",
"I found that the paper suffers many shortcomings that must be addressed: 1) The writing and organization is quite cumbersome and should be improved.",
"2) The authors state in the abstract (and elsewhere): \"... showing that (model free) policy gradient methods globally converge to the optimal solution ...\".",
"This is misleading and NOT true.",
"The authors show the convergence of the objective but not of the iterates sequence.",
"This should be rephrased elsewhere.",
"3) An important literature on convergence of descent-type methods for semialgebraic objectives is available but not discussed."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 3,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 2,
"total": 7
} | 0.428571 | 0 | 0.142857 | 0.428571 | 0.142857 | 0.285714 | 0.571429 | 0.285714 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
Regularization for Deep Learning: A Taxonomy | Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a novel, systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We identify the atomic building blocks of existing methods, and decouple the assumptions they enforce from the mathematical tools they rely on. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods. | The main aim of ICLR conference, at least as it is written on its website, is to provide new results on theories, methods and algorithms, supporting further breakthroughs in AI and DL. In this respect the authors of the paper claim that their “systematic approach enables the discovery of new, improved regularization methods by combining the best properties of the existing ones.” However, the authors did not provide any discoveries concerning new approaches to regularisation supporting this claim. Thus, the main contribution of the paper is that the authors made a review and performed classification of available regularisation methods. So, the paper is in fact a survey paper, which is more appropriate for full-scale journals. The work, developed by the authors, is really big. However, I am not sure it will bring a lot of benefits for readers except those who need review for some reports, introductions in PhD thesis, etc. Although the authors mentioned some approaches to combine different regularisations, they did not performed any experiments supporting their ideas. Thus, I think that - the paper is well written in general, - it can be improved (by taking into account several important comments from the Reviewer 2) and served as a review paper in some appropriate journal, - the paper is not suited for ICLR proceedings due to reasons, mentioned above. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRegularization for Deep Learning: A Taxonomy\nRegularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a novel, systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We identify the atomic building blocks of existing methods, and decouple the assumptions they enforce from the mathematical tools they rely on. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The main aim of ICLR conference, at least as it is written on its website, is to provide new results on theories, methods and algorithms, supporting further breakthroughs in AI and DL. In this respect the authors of the paper claim that their “systematic approach enables the discovery of new, improved regularization methods by combining the best properties of the existing ones.” However, the authors did not provide any discoveries concerning new approaches to regularisation supporting this claim. Thus, the main contribution of the paper is that the authors made a review and performed classification of available regularisation methods. So, the paper is in fact a survey paper, which is more appropriate for full-scale journals. The work, developed by the authors, is really big. However, I am not sure it will bring a lot of benefits for readers except those who need review for some reports, introductions in PhD thesis, etc. Although the authors mentioned some approaches to combine different regularisations, they did not performed any experiments supporting their ideas. Thus, I think that - the paper is well written in general, - it can be improved (by taking into account several important comments from the Reviewer 2) and served as a review paper in some appropriate journal, - the paper is not suited for ICLR proceedings due to reasons, mentioned above.",
"role": "assistant"
}
]
| [
" The main aim of ICLR conference, at least as it is written on its website, is to provide new results on theories, methods and algorithms, supporting further breakthroughs in AI and DL.",
"In this respect the authors of the paper claim that their “systematic approach enables the discovery of new, improved regularization methods by combining the best properties of the existing ones.” However, the authors did not provide any discoveries concerning new approaches to regularisation supporting this claim.",
"Thus, the main contribution of the paper is that the authors made a review and performed classification of available regularisation methods.",
"So, the paper is in fact a survey paper, which is more appropriate for full-scale journals.",
"The work, developed by the authors, is really big.",
"However, I am not sure it will bring a lot of benefits for readers except those who need review for some reports, introductions in PhD thesis, etc.",
"Although the authors mentioned some approaches to combine different regularisations, they did not performed any experiments supporting their ideas.",
"Thus, I think that - the paper is well written in general, - it can be improved (by taking into account several important comments from the Reviewer 2) and served as a review paper in some appropriate journal, - the paper is not suited for ICLR proceedings due to reasons, mentioned above."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 8
} | 0.5 | 0 | 0.5 | 0.625 | 0.25 | 0.125 | 0.125 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 1.855439 | 0.394561 |
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning | A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations. | This paper shows that learning sentence representations from a diverse set of tasks (skip-thought objective, MT, constituency parsing, and natural language inference) produces . The main contribution of the paper is to show learning from multiple tasks improves the quality of the learned representations. Experiments on various text classification and sentiment analysis datasets show that the proposed method is competitive with existing approaches. There is an impressive number of experiments presented in the paper, but the results are a bit mixed, and it is not always clear that adding more tasks help. I think this paper addresses an important problem of learning general purpose sentence representations. However, I am unable to draw a definitive conclusion from the paper. From Table 2, the best performing model is not always the one with more tasks. For example, adding a parsing objective can either improve or lower the performance quite significantly. Could it be that datasets such as MRPC, SICK, and STSB require more understanding of syntax? Even if this is the case, why adding this objective hurt performance for other datasets? Importantly, it is also not clear whether the performance improvement comes from having more unlabeled data (even if it is trained with the same training objective) or having multiple training objectives. Another question I have is that if there is any specific reason that language modeling is not included as one of the training objectives to learn sentence representations, given that it seems to be the easiest one to collect training data for. The results for transfer learning and low resource settings are more positive. However, it is not surprising that pretraining parts of the model on a large amount of unlabeled data helps when there is not a lot of labeled examples. Overall, while the main contribution of the paper is that having multiple training objectives help learning better sentence, I am not yet convinced by the experiments that this is indeed the case. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning\nA lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper shows that learning sentence representations from a diverse set of tasks (skip-thought objective, MT, constituency parsing, and natural language inference) produces . The main contribution of the paper is to show learning from multiple tasks improves the quality of the learned representations. Experiments on various text classification and sentiment analysis datasets show that the proposed method is competitive with existing approaches. There is an impressive number of experiments presented in the paper, but the results are a bit mixed, and it is not always clear that adding more tasks help. I think this paper addresses an important problem of learning general purpose sentence representations. However, I am unable to draw a definitive conclusion from the paper. From Table 2, the best performing model is not always the one with more tasks. For example, adding a parsing objective can either improve or lower the performance quite significantly. Could it be that datasets such as MRPC, SICK, and STSB require more understanding of syntax? Even if this is the case, why adding this objective hurt performance for other datasets? Importantly, it is also not clear whether the performance improvement comes from having more unlabeled data (even if it is trained with the same training objective) or having multiple training objectives. Another question I have is that if there is any specific reason that language modeling is not included as one of the training objectives to learn sentence representations, given that it seems to be the easiest one to collect training data for. The results for transfer learning and low resource settings are more positive. However, it is not surprising that pretraining parts of the model on a large amount of unlabeled data helps when there is not a lot of labeled examples. Overall, while the main contribution of the paper is that having multiple training objectives help learning better sentence, I am not yet convinced by the experiments that this is indeed the case.",
"role": "assistant"
}
]
| [
" This paper shows that learning sentence representations from a diverse set of tasks (skip-thought objective, MT, constituency parsing, and natural language inference) produces .",
"The main contribution of the paper is to show learning from multiple tasks improves the quality of the learned representations.",
"Experiments on various text classification and sentiment analysis datasets show that the proposed method is competitive with existing approaches.",
"There is an impressive number of experiments presented in the paper, but the results are a bit mixed, and it is not always clear that adding more tasks help.",
"I think this paper addresses an important problem of learning general purpose sentence representations.",
"However, I am unable to draw a definitive conclusion from the paper.",
"From Table 2, the best performing model is not always the one with more tasks.",
"For example, adding a parsing objective can either improve or lower the performance quite significantly.",
"Could it be that datasets such as MRPC, SICK, and STSB require more understanding of syntax?",
"Even if this is the case, why adding this objective hurt performance for other datasets?",
"Importantly, it is also not clear whether the performance improvement comes from having more unlabeled data (even if it is trained with the same training objective) or having multiple training objectives.",
"Another question I have is that if there is any specific reason that language modeling is not included as one of the training objectives to learn sentence representations, given that it seems to be the easiest one to collect training data for.",
"The results for transfer learning and low resource settings are more positive.",
"However, it is not surprising that pretraining parts of the model on a large amount of unlabeled data helps when there is not a lot of labeled examples.",
"Overall, while the main contribution of the paper is that having multiple training objectives help learning better sentence, I am not yet convinced by the experiments that this is indeed the case."
] | {
"criticism": 5,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 15
} | 0.333333 | 0.133333 | 0.2 | 0.666667 | 0.266667 | 0.2 | 0.2 | 0.133333 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.133333 | 2.070204 | 0.06313 |
Correcting Nuisance Variation using Wasserstein Distance | Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases. In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects). We develop a general framework for adjusting the image embeddings in order to `forget* domain-specific information while preserving relevant biological information. To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset presented, the replicated treatment is the negative control. We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present. | This contribution deal with nuisance factors afflicting biological cell images with a domain adaptation approach: the embedding vectors generated from cell images show spurious correlation. The authors define a Wasserstein Distance Network to find a suitable affine transformation that reduces the nuisance factor. The evaluation on a real dataset yields correct results, this approach is quite general and could be applied to different problems. The contribution of this approach could be better highlighted. The early stopping criteria tend to favor suboptimal solution, indeed relying on the Cramer distance is possible improvement. As a side note, the k-NN MOA is central to for the evaluation of the proposed approach. A possible improvement is to try other means for the embedding instead of the Euclidean one. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCorrecting Nuisance Variation using Wasserstein Distance\nProfiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases. In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects). We develop a general framework for adjusting the image embeddings in order to `forget* domain-specific information while preserving relevant biological information. To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset presented, the replicated treatment is the negative control. We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This contribution deal with nuisance factors afflicting biological cell images with a domain adaptation approach: the embedding vectors generated from cell images show spurious correlation. The authors define a Wasserstein Distance Network to find a suitable affine transformation that reduces the nuisance factor. The evaluation on a real dataset yields correct results, this approach is quite general and could be applied to different problems. The contribution of this approach could be better highlighted. The early stopping criteria tend to favor suboptimal solution, indeed relying on the Cramer distance is possible improvement. As a side note, the k-NN MOA is central to for the evaluation of the proposed approach. A possible improvement is to try other means for the embedding instead of the Euclidean one.",
"role": "assistant"
}
]
| [
" This contribution deal with nuisance factors afflicting biological cell images with a domain adaptation approach: the embedding vectors generated from cell images show spurious correlation.",
"The authors define a Wasserstein Distance Network to find a suitable affine transformation that reduces the nuisance factor.",
"The evaluation on a real dataset yields correct results, this approach is quite general and could be applied to different problems.",
"The contribution of this approach could be better highlighted.",
"The early stopping criteria tend to favor suboptimal solution, indeed relying on the Cramer distance is possible improvement.",
"As a side note, the k-NN MOA is central to for the evaluation of the proposed approach.",
"A possible improvement is to try other means for the embedding instead of the Euclidean one."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 7
} | 0 | 0 | 0.285714 | 0.857143 | 0 | 0.142857 | 0.571429 | 0.571429 | 7 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
SEARNN: Training RNNs with global-local losses | We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task. | The paper proposes new RNN training method based on the SEARN learning to search (L2S) algorithm and named as SeaRnn. It proposes a way of overcoming the limitation of local optimization trough the exploitation of the structured losses by L2S. It can consider different classifiers and loss functions, and a sampling strategy for making the optimization problem scalable is proposed. SeaRnn improves the results obtained by MLE training in three different problems, including a large-vocabulary machine translation. In summary, a very nice paper. Quality: SeaRnn is a well rooted and successful application of the L2S strategy to the RNN training that combines at the same time global optimization and scalable complexity. Clarity: The paper is well structured and written, with a nice and well-founded literature review. Originality: the paper presents a new algorithm for training RNN based on the L2S methodology, and it has been proven to be competitive in both toy and real-world problems. Significance: although the application of L2S to RNN training is not new, the contribution to the overcoming the limitations due to error propagation and MLE training of RNN is substantial. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSEARNN: Training RNNs with global-local losses\nWe propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \"learning to search\" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes new RNN training method based on the SEARN learning to search (L2S) algorithm and named as SeaRnn. It proposes a way of overcoming the limitation of local optimization trough the exploitation of the structured losses by L2S. It can consider different classifiers and loss functions, and a sampling strategy for making the optimization problem scalable is proposed. SeaRnn improves the results obtained by MLE training in three different problems, including a large-vocabulary machine translation. In summary, a very nice paper. Quality: SeaRnn is a well rooted and successful application of the L2S strategy to the RNN training that combines at the same time global optimization and scalable complexity. Clarity: The paper is well structured and written, with a nice and well-founded literature review. Originality: the paper presents a new algorithm for training RNN based on the L2S methodology, and it has been proven to be competitive in both toy and real-world problems. Significance: although the application of L2S to RNN training is not new, the contribution to the overcoming the limitations due to error propagation and MLE training of RNN is substantial.",
"role": "assistant"
}
]
| [
" The paper proposes new RNN training method based on the SEARN learning to search (L2S) algorithm and named as SeaRnn.",
"It proposes a way of overcoming the limitation of local optimization trough the exploitation of the structured losses by L2S.",
"It can consider different classifiers and loss functions, and a sampling strategy for making the optimization problem scalable is proposed.",
"SeaRnn improves the results obtained by MLE training in three different problems, including a large-vocabulary machine translation.",
"In summary, a very nice paper.",
"Quality: SeaRnn is a well rooted and successful application of the L2S strategy to the RNN training that combines at the same time global optimization and scalable complexity.",
"Clarity: The paper is well structured and written, with a nice and well-founded literature review.",
"Originality: the paper presents a new algorithm for training RNN based on the L2S methodology, and it has been proven to be competitive in both toy and real-world problems.",
"Significance: although the application of L2S to RNN training is not new, the contribution to the overcoming the limitations due to error propagation and MLE training of RNN is substantial."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 6,
"praise": 4,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 9
} | 0 | 0 | 0.333333 | 0.666667 | 0.444444 | 0.333333 | 0.333333 | 0.111111 | 9 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.222222 | 1.969703 | 0.252519 |
SEARNN: Training RNNs with global-local losses | We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task. | This paper extends the concept of global rather than local optimization from the learning to search (L2S) literature to RNNs, specifically in the formation and implementation of SEARNN. Their work takes steps to consider and resolve issues that arise from restricting optimization to only local ground truth choices, which traditionally results in label / transition bias from the teacher forced model. The underlying issue (MLE training of RNNs) is well founded and referenced, their introduction and extension to the L2S techniques that may help resolve the issue are promising, and their experiments, both small and large, show the efficacy of their technique. I am also glad to see the exploration of scaling SEARNN to the IWSLT*14 de-en machine translation dataset. As noted by the authors, it is a dataset that has been tackled by related papers and importantly a well scaled dataset. For SEARNN and related techniques to see widespread adoption, the scaling analysis this paper provides is a fundamental component. This reviewer, whilst not having read all of the appendix in detail, also appreciates the additional insights provided by it, such as including losses that were attempted but did not result in appreciable gains. Overall I believe this is a paper that tackles an important topic area and provides a novel and persuasive potential solution to many of the issues it highlights. (extremely minor typo: "One popular possibility from L2S is go the full reduction route down to binary classification") | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSEARNN: Training RNNs with global-local losses\nWe propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the \"learning to search\" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper extends the concept of global rather than local optimization from the learning to search (L2S) literature to RNNs, specifically in the formation and implementation of SEARNN. Their work takes steps to consider and resolve issues that arise from restricting optimization to only local ground truth choices, which traditionally results in label / transition bias from the teacher forced model. The underlying issue (MLE training of RNNs) is well founded and referenced, their introduction and extension to the L2S techniques that may help resolve the issue are promising, and their experiments, both small and large, show the efficacy of their technique. I am also glad to see the exploration of scaling SEARNN to the IWSLT*14 de-en machine translation dataset. As noted by the authors, it is a dataset that has been tackled by related papers and importantly a well scaled dataset. For SEARNN and related techniques to see widespread adoption, the scaling analysis this paper provides is a fundamental component. This reviewer, whilst not having read all of the appendix in detail, also appreciates the additional insights provided by it, such as including losses that were attempted but did not result in appreciable gains. Overall I believe this is a paper that tackles an important topic area and provides a novel and persuasive potential solution to many of the issues it highlights. (extremely minor typo: \"One popular possibility from L2S is go the full reduction route down to binary classification\")",
"role": "assistant"
}
]
| [
" This paper extends the concept of global rather than local optimization from the learning to search (L2S) literature to RNNs, specifically in the formation and implementation of SEARNN.",
"Their work takes steps to consider and resolve issues that arise from restricting optimization to only local ground truth choices, which traditionally results in label / transition bias from the teacher forced model.",
"The underlying issue (MLE training of RNNs) is well founded and referenced, their introduction and extension to the L2S techniques that may help resolve the issue are promising, and their experiments, both small and large, show the efficacy of their technique.",
"I am also glad to see the exploration of scaling SEARNN to the IWSLT*14 de-en machine translation dataset.",
"As noted by the authors, it is a dataset that has been tackled by related papers and importantly a well scaled dataset.",
"For SEARNN and related techniques to see widespread adoption, the scaling analysis this paper provides is a fundamental component.",
"This reviewer, whilst not having read all of the appendix in detail, also appreciates the additional insights provided by it, such as including losses that were attempted but did not result in appreciable gains.",
"Overall I believe this is a paper that tackles an important topic area and provides a novel and persuasive potential solution to many of the issues it highlights.",
"(extremely minor typo: \"One popular possibility from L2S is go the full reduction route down to binary classification\")"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 9
} | 0.111111 | 0.111111 | 0.444444 | 0.777778 | 0.444444 | 0.111111 | 0.222222 | 0 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.222222 | 1.969703 | 0.252519 |
Automatically Inferring Data Quality for Spatiotemporal Forecasting | Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable. Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results. The problem could be exacerbated in black-box prediction models, such as deep neural networks. In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels. Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures. We evaluate our proposed method on forecasting temperatures in Los Angeles. | Summary of the reviews: Pros: • A novel way to evaluate the quality of heterogeneous data sources. • An interesting way to put the data quality measurement as a regularization term in the objective function. Cons: • Since the data quality is a function of local variation, it is unclear about the advantage of the proposed data quality regularization versus using a simple moving average regularization or local smoothness regularization. • It is unclear about the advantage of using the Multi-layer graph convolutional networks versus two naïve settings for graph construction + simple LSTM, see detailed comments D1 Detailed comments: D1: Compared to the proposed approaches, there are two alternative naïve ways: 1) Instead of construct the similarity graph with Gaussian kernel and associated each vertex with different types of time-series, we can also construct one unified similarity graph that is a weighted combination of different types of data sources and then apply traditional LSTM; 2) During the GCN phases, one can apply type-aware random walk as an extension to the deep walk that can only handle a single source of data. It is unclear about the advantage of using the Multi-layer graph convolutional networks versus these two naïve settings. Either some discussions or empirical comparisons would be a plus. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAutomatically Inferring Data Quality for Spatiotemporal Forecasting\nSpatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable. Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results. The problem could be exacerbated in black-box prediction models, such as deep neural networks. In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels. Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures. We evaluate our proposed method on forecasting temperatures in Los Angeles.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary of the reviews: Pros: • A novel way to evaluate the quality of heterogeneous data sources. • An interesting way to put the data quality measurement as a regularization term in the objective function. Cons: • Since the data quality is a function of local variation, it is unclear about the advantage of the proposed data quality regularization versus using a simple moving average regularization or local smoothness regularization. • It is unclear about the advantage of using the Multi-layer graph convolutional networks versus two naïve settings for graph construction + simple LSTM, see detailed comments D1 Detailed comments: D1: Compared to the proposed approaches, there are two alternative naïve ways: 1) Instead of construct the similarity graph with Gaussian kernel and associated each vertex with different types of time-series, we can also construct one unified similarity graph that is a weighted combination of different types of data sources and then apply traditional LSTM; 2) During the GCN phases, one can apply type-aware random walk as an extension to the deep walk that can only handle a single source of data. It is unclear about the advantage of using the Multi-layer graph convolutional networks versus these two naïve settings. Either some discussions or empirical comparisons would be a plus.",
"role": "assistant"
}
]
| [
" Summary of the reviews: Pros: • A novel way to evaluate the quality of heterogeneous data sources.",
"• An interesting way to put the data quality measurement as a regularization term in the objective function.",
"Cons: • Since the data quality is a function of local variation, it is unclear about the advantage of the proposed data quality regularization versus using a simple moving average regularization or local smoothness regularization.",
"• It is unclear about the advantage of using the Multi-layer graph convolutional networks versus two naïve settings for graph construction + simple LSTM, see detailed comments D1 Detailed comments: D1: Compared to the proposed approaches, there are two alternative naïve ways: 1) Instead of construct the similarity graph with Gaussian kernel and associated each vertex with different types of time-series, we can also construct one unified similarity graph that is a weighted combination of different types of data sources and then apply traditional LSTM; 2) During the GCN phases, one can apply type-aware random walk as an extension to the deep walk that can only handle a single source of data.",
"It is unclear about the advantage of using the Multi-layer graph convolutional networks versus these two naïve settings.",
"Either some discussions or empirical comparisons would be a plus."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 6
} | 0.5 | 0 | 0.333333 | 1 | 0.333333 | 0 | 0.166667 | 0.333333 | 6 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.666667 | 1.893328 | 0.773339 |
Hierarchical Adversarially Learned Inference | We propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model*s inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task. | ****** Please note the adjusted review score after revisions and clarifications of the authors. The paper was improved significantly but still lacks novelty. For context, multi-layer VAEs also were not published unmodified as follow-up papers since the objective is identical. Also, I would suggest the authors study the modified prior with marginal statistics and other means to understand not just *that* their model performs better with the extra degree of freedom but also *how* exactly it does it. The only evaluation is sampling from z1 and z2 for reconstruction which shows that some structure is learned in z2 and the attribute classification task. However, more statistical understanding of the distributions of the extra layers/capacity of the model would be interesting. ****** The authors propose a hierarchical GAN setup, called HALI, where they can learn multiple sets of latent variables. They utilize this in a deep generative model for image generation and manage to generate good-looking images, faithful reconstructions and good inpainting results. At the heart of the technique lies the stacking of GANS and the authors claim to be proposing a novel model here. First, Emily Denton et. al proposed a stacked version of GANs in "Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks", which goes uncited here and should be discussed as it was the first work stacking GANs, even if it did so with layer-wise pretraining. Furthermore, the differences to another very similar work to that of the authors (StackGan by Huan et al) are unclear and not well motivated. And third, the authors fail to cite *Adversarial Message Passing* by Karaletsos 2016, which has first introduced joint training of generative models with structure by hierarchical GANs and generalizes the theory to a particular form of inference for structured models with GANs in the loop. This cannot be called concurrent work as it has been around for a year and has been seen and discussed at length in the community, but the authors fail to acknowledge that their basic idea of a joint generative model and inference procedure is subsumed there. 
In addition, the authors also do not offer any novel technical insights compared to that paper and actually fall short in positioning their paper in the broader context of approximate inference for generative models. Given these failings, this paper has very little novelty and does not perform accurate attribution of credit to the community. Also, the authors propose particular one-off models and do not generalize this technique to an inference principle that could be reusable. As to its merits, the authors manage to get a particularly simple instance of a *deep gan* working for image generation and show the empirical benefits in terms of image generation tasks. In addition, they test their method on a semi-supervised task and show good performance, but with a lack of details. In conclusion, this paper needs to flesh out its contributions on the empirical side and position its exact contributions accordingly and improve the attribution. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nHierarchical Adversarially Learned Inference\nWe propose a novel hierarchical generative model with a simple Markovian structure and a corresponding inference model. Both the generative and inference model are trained using the adversarial learning paradigm. We demonstrate that the hierarchical structure supports the learning of progressively more abstract representations as well as providing semantically meaningful reconstructions with different levels of fidelity. Furthermore, we show that minimizing the Jensen-Shanon divergence between the generative and inference network is enough to minimize the reconstruction error. The resulting semantically meaningful hierarchical latent structure discovery is exemplified on the CelebA dataset. There, we show that the features learned by our model in an unsupervised way outperform the best handcrafted features. Furthermore, the extracted features remain competitive when compared to several recent deep supervised approaches on an attribute prediction task on CelebA. Finally, we leverage the model*s inference network to achieve state-of-the-art performance on a semi-supervised variant of the MNIST digit classification task.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n ****** Please note the adjusted review score after revisions and clarifications of the authors. The paper was improved significantly but still lacks novelty. For context, multi-layer VAEs also were not published unmodified as follow-up papers since the objective is identical. Also, I would suggest the authors study the modified prior with marginal statistics and other means to understand not just *that* their model performs better with the extra degree of freedom but also *how* exactly it does it. The only evaluation is sampling from z1 and z2 for reconstruction which shows that some structure is learned in z2 and the attribute classification task. However, more statistical understanding of the distributions of the extra layers/capacity of the model would be interesting. ****** The authors propose a hierarchical GAN setup, called HALI, where they can learn multiple sets of latent variables. They utilize this in a deep generative model for image generation and manage to generate good-looking images, faithful reconstructions and good inpainting results. At the heart of the technique lies the stacking of GANS and the authors claim to be proposing a novel model here. First, Emily Denton et. al proposed a stacked version of GANs in \"Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks\", which goes uncited here and should be discussed as it was the first work stacking GANs, even if it did so with layer-wise pretraining. Furthermore, the differences to another very similar work to that of the authors (StackGan by Huan et al) are unclear and not well motivated. And third, the authors fail to cite *Adversarial Message Passing* by Karaletsos 2016, which has first introduced joint training of generative models with structure by hierarchical GANs and generalizes the theory to a particular form of inference for structured models with GANs in the loop. This cannot be called concurrent work as it has been around for a year and has been seen and discussed at length in the community, but the authors fail to acknowledge that their basic idea of a joint generative model and inference procedure is subsumed there. 
In addition, the authors also do not offer any novel technical insights compared to that paper and actually fall short in positioning their paper in the broader context of approximate inference for generative models. Given these failings, this paper has very little novelty and does not perform accurate attribution of credit to the community. Also, the authors propose particular one-off models and do not generalize this technique to an inference principle that could be reusable. As to its merits, the authors manage to get a particularly simple instance of a *deep gan* working for image generation and show the empirical benefits in terms of image generation tasks. In addition, they test their method on a semi-supervised task and show good performance, but with a lack of details. In conclusion, this paper needs to flesh out its contributions on the empirical side and position its exact contributions accordingly and improve the attribution.",
"role": "assistant"
}
]
| [
" ****** Please note the adjusted review score after revisions and clarifications of the authors.",
"The paper was improved significantly but still lacks novelty.",
"For context, multi-layer VAEs also were not published unmodified as follow-up papers since the objective is identical.",
"Also, I would suggest the authors study the modified prior with marginal statistics and other means to understand not just *that* their model performs better with the extra degree of freedom but also *how* exactly it does it.",
"The only evaluation is sampling from z1 and z2 for reconstruction which shows that some structure is learned in z2 and the attribute classification task.",
"However, more statistical understanding of the distributions of the extra layers/capacity of the model would be interesting.",
"****** The authors propose a hierarchical GAN setup, called HALI, where they can learn multiple sets of latent variables.",
"They utilize this in a deep generative model for image generation and manage to generate good-looking images, faithful reconstructions and good inpainting results.",
"At the heart of the technique lies the stacking of GANS and the authors claim to be proposing a novel model here.",
"First, Emily Denton et.",
"al proposed a stacked version of GANs in \"Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks\", which goes uncited here and should be discussed as it was the first work stacking GANs, even if it did so with layer-wise pretraining.",
"Furthermore, the differences to another very similar work to that of the authors (StackGan by Huan et al) are unclear and not well motivated.",
"And third, the authors fail to cite *Adversarial Message Passing* by Karaletsos 2016, which has first introduced joint training of generative models with structure by hierarchical GANs and generalizes the theory to a particular form of inference for structured models with GANs in the loop.",
"This cannot be called concurrent work as it has been around for a year and has been seen and discussed at length in the community, but the authors fail to acknowledge that their basic idea of a joint generative model and inference procedure is subsumed there.",
"In addition, the authors also do not offer any novel technical insights compared to that paper and actually fall short in positioning their paper in the broader context of approximate inference for generative models.",
"Given these failings, this paper has very little novelty and does not perform accurate attribution of credit to the community.",
"Also, the authors propose particular one-off models and do not generalize this technique to an inference principle that could be reusable.",
"As to its merits, the authors manage to get a particularly simple instance of a *deep gan* working for image generation and show the empirical benefits in terms of image generation tasks.",
"In addition, they test their method on a semi-supervised task and show good performance, but with a lack of details.",
"In conclusion, this paper needs to flesh out its contributions on the empirical side and position its exact contributions accordingly and improve the attribution."
] | {
"criticism": 7,
"example": 0,
"importance_and_relevance": 7,
"materials_and_methods": 13,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 7,
"suggestion_and_solution": 5,
"total": 20
} | 0.35 | 0 | 0.35 | 0.65 | 0.15 | 0.05 | 0.35 | 0.25 | 20 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.15 | 1.376661 | 0.773339 |
cGANs with Projection Discriminator | We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. | The paper proposes a simple modification to conditional GANs, obtaining impressive results on both the quality and diversity of samples on ImageNet dataset. Instead of concatenating the condition vector y to the input image x or hidden layers of the discriminator D as in the literature, the authors propose to project the condition y onto a penultimate feature space V of D (by simply taking an inner product between y and V) . This implementation basically restricts the conditional distribution p(y|x) to be really simple and seems to be posing a good prior leading to great empirical results. + Quality: - Simple method leading to great results on ImageNet! - While the paper admittedly leaves theoretical work for future work, the paper would be much stronger if the authors could perform an ablation study to provide readers with more intuition on why this work. One experiment could be: sticking y to every hidden layer of D before the current projection layer, and removing these y*s increasingly and seeing how performance changes. - Appropriate comparison with existing conditional models: AC-GANs and PPGNs. - Appropriate (extensive) metrics were used (Inception score/accuracy, MS-SSIM, FID) + Clarity: - Should explicitly define p, q, r upfront before Equation 1 (or between Eq1 and Eq2). - PPG should be PPGNs. + Originality: This work proposes a simple method that is original compared existing GANs. + Significance: While the contribution is significant, more experiments providing more intuition into why this projection works so well would make the paper much stronger. Overall, I really enjoy reading this paper and recommend for acceptance! | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\ncGANs with Projection Discriminator\nWe propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a simple modification to conditional GANs, obtaining impressive results on both the quality and diversity of samples on the ImageNet dataset. Instead of concatenating the condition vector y to the input image x or hidden layers of the discriminator D as in the literature, the authors propose to project the condition y onto a penultimate feature space V of D (by simply taking an inner product between y and V). This implementation basically restricts the conditional distribution p(y|x) to be really simple and seems to be posing a good prior leading to great empirical results. + Quality: - Simple method leading to great results on ImageNet! - While the paper admittedly leaves theoretical work for future work, the paper would be much stronger if the authors could perform an ablation study to provide readers with more intuition on why this works. One experiment could be: sticking y to every hidden layer of D before the current projection layer, and removing these y's increasingly and seeing how performance changes. - Appropriate comparison with existing conditional models: AC-GANs and PPGNs. - Appropriate (extensive) metrics were used (Inception score/accuracy, MS-SSIM, FID) + Clarity: - Should explicitly define p, q, r upfront before Equation 1 (or between Eq1 and Eq2). - PPG should be PPGNs. + Originality: This work proposes a simple method that is original compared to existing GANs. + Significance: While the contribution is significant, more experiments providing more intuition into why this projection works so well would make the paper much stronger. Overall, I really enjoy reading this paper and recommend for acceptance!",
"role": "assistant"
}
]
| [
" The paper proposes a simple modification to conditional GANs, obtaining impressive results on both the quality and diversity of samples on the ImageNet dataset.",
"Instead of concatenating the condition vector y to the input image x or hidden layers of the discriminator D as in the literature, the authors propose to project the condition y onto a penultimate feature space V of D (by simply taking an inner product between y and V).",
"This implementation basically restricts the conditional distribution p(y|x) to be really simple and seems to be posing a good prior leading to great empirical results.",
"+ Quality: - Simple method leading to great results on ImageNet!",
"- While the paper admittedly leaves theoretical work for future work, the paper would be much stronger if the authors could perform an ablation study to provide readers with more intuition on why this works.",
"One experiment could be: sticking y to every hidden layer of D before the current projection layer, and removing these y's increasingly and seeing how performance changes.",
"- Appropriate comparison with existing conditional models: AC-GANs and PPGNs.",
"- Appropriate (extensive) metrics were used (Inception score/accuracy, MS-SSIM, FID) + Clarity: - Should explicitly define p, q, r upfront before Equation 1 (or between Eq1 and Eq2).",
"- PPG should be PPGNs.",
"+ Originality: This work proposes a simple method that is original compared to existing GANs.",
"+ Significance: While the contribution is significant, more experiments providing more intuition into why this projection works so well would make the paper much stronger.",
"Overall, I really enjoy reading this paper and recommend for acceptance!"
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 8,
"praise": 5,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 6,
"total": 12
} | 0 | 0 | 0.25 | 0.666667 | 0.416667 | 0.083333 | 0.333333 | 0.5 | 12 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 2.234218 | 0.015782 |
Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio | Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet. | It seems like the authors have carefully thought about this problem, and have come up with some elegant solutions, but I am not sold on whether it*s an appropriate match for this conference, mainly because it*s not clear how many machine learning people will be interested in this approach. There was a time about 2 or 3 years ago when sparse-matrix approaches seemed to have a lot of promise, but I get the impression that a lot of people have moved on. The issue is that it*s hard to construct a scenario where it makes sense from a speed or memory standpoint to do this. The authors seem to have found a way to substantially compress the indexes, but it*s not clear to me that this really ends up solving any practical problem. Towards the end of the paper I see mention of a 38.1% reduction in matrix size. That is way too little to make sense in any practical application, especially when you consider the overhead of decompression. It seems to me that you could easily get a factor of 4 to 8 of compression just by finding a suitable way to encode the floating-point numbers in many fewer bits (since the weight parameters are quite Gaussian-distributed and don*t need to be that accurate). | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nViterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio\nWeight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n It seems like the authors have carefully thought about this problem, and have come up with some elegant solutions, but I am not sold on whether it's an appropriate match for this conference, mainly because it's not clear how many machine learning people will be interested in this approach. There was a time about 2 or 3 years ago when sparse-matrix approaches seemed to have a lot of promise, but I get the impression that a lot of people have moved on. The issue is that it's hard to construct a scenario where it makes sense from a speed or memory standpoint to do this. The authors seem to have found a way to substantially compress the indexes, but it's not clear to me that this really ends up solving any practical problem. Towards the end of the paper I see mention of a 38.1% reduction in matrix size. That is way too little to make sense in any practical application, especially when you consider the overhead of decompression. It seems to me that you could easily get a factor of 4 to 8 of compression just by finding a suitable way to encode the floating-point numbers in many fewer bits (since the weight parameters are quite Gaussian-distributed and don't need to be that accurate).",
"role": "assistant"
}
]
| [
" It seems like the authors have carefully thought about this problem, and have come up with some elegant solutions, but I am not sold on whether it's an appropriate match for this conference, mainly because it's not clear how many machine learning people will be interested in this approach.",
"There was a time about 2 or 3 years ago when sparse-matrix approaches seemed to have a lot of promise, but I get the impression that a lot of people have moved on.",
"The issue is that it's hard to construct a scenario where it makes sense from a speed or memory standpoint to do this.",
"The authors seem to have found a way to substantially compress the indexes, but it's not clear to me that this really ends up solving any practical problem.",
"Towards the end of the paper I see mention of a 38.1% reduction in matrix size.",
"That is way too little to make sense in any practical application, especially when you consider the overhead of decompression.",
"It seems to me that you could easily get a factor of 4 to 8 of compression just by finding a suitable way to encode the floating-point numbers in many fewer bits (since the weight parameters are quite Gaussian-distributed and don't need to be that accurate)."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 7
} | 0.571429 | 0 | 0.428571 | 0.571429 | 0.142857 | 0.428571 | 0.142857 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.428571 | 1.860404 | 0.568167 |
Compressing Word Embeddings via Deep Compositional Code Learning | Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture. | This paper presents an interesting idea to word embeddings that it combines a few base vectors to generate new word embeddings. It also adopts an interesting multicodebook approach for encoding than binary embeddings. The paper presents the proposed approach to a few NLP problems and have shown that this is able to significant reduce the size, increase compression ratio, and still achieved good accuracy. The experiments are convincing and solid. Overall I am weakly inclined to accept this paper. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCompressing Word Embeddings via Deep Compositional Code Learning\nNatural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents an interesting idea for word embeddings: it combines a few basis vectors to generate new word embeddings. It also adopts an interesting multi-codebook approach for encoding rather than binary embeddings. The paper applies the proposed approach to a few NLP problems and shows that it is able to significantly reduce the size, increase the compression ratio, and still achieve good accuracy. The experiments are convincing and solid. Overall I am weakly inclined to accept this paper.",
"role": "assistant"
}
]
| [
" This paper presents an interesting idea for word embeddings: it combines a few basis vectors to generate new word embeddings.",
"It also adopts an interesting multi-codebook approach for encoding rather than binary embeddings.",
"The paper applies the proposed approach to a few NLP problems and shows that it is able to significantly reduce the size, increase the compression ratio, and still achieve good accuracy.",
"The experiments are convincing and solid.",
"Overall I am weakly inclined to accept this paper."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 5
} | 0.2 | 0 | 0.6 | 0.6 | 0.8 | 0.4 | 0 | 0 | 5 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 2.6 | 1.589924 | 1.010076 |
Compressing Word Embeddings via Deep Compositional Code Learning | Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture. | This paper proposed a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce more than 90% memory consumption while keeping original model accuracy in both the sentiment analysis task and the machine translation tasks. Overall, the paper is well-written. The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation. I think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer. For example, the model does not guarantee to be able to convert resulting "indices" to actual words (i.e., there are multiple words that have completely same indices, such as rows 4 and 6 in Table 5), and also there is no trivial method to restore the original indices from the composite vector. As a result, the model couldn*t be used also as the proxy of the word prediction (softmax) layer, which is another but usually more critical bottleneck of the machine translation task. For reader*s comprehension, it would like to add results about whole memory consumption of each model as well. Also, although this paper is focused on only the input embeddings, authors should refer some recent papers that tackle to reduce the complexity of the softmax layer. There are also many studies, and citing similar approaches may help readers to comprehend overall region of these studies. Furthermore, I would like to see two additional analysis. First, if we trained the proposed model with starting from "zero" (e.g., randomly settling each index value), what results are obtained? Second, What kind of information is distributed in each trained basis vector? Are there any common/different things between bases trained by different tasks? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. 
Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCompressing Word Embeddings via Deep Compositional Code Learning\nNatural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce memory consumption by more than 90% while keeping the original model accuracy in both the sentiment analysis task and the machine translation tasks. Overall, the paper is well-written. The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation. I think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer. For example, the model is not guaranteed to be able to convert the resulting \"indices\" back to actual words (i.e., there are multiple words that have completely the same indices, such as rows 4 and 6 in Table 5), and also there is no trivial method to restore the original indices from the composite vector. As a result, the model couldn't also be used as a proxy for the word prediction (softmax) layer, which is another but usually more critical bottleneck of the machine translation task. For readers' comprehension, it would be nice to add results about the whole memory consumption of each model as well. Also, although this paper is focused on only the input embeddings, the authors should refer to some recent papers that tackle reducing the complexity of the softmax layer. There are also many studies, and citing similar approaches may help readers to comprehend the overall scope of these studies. Furthermore, I would like to see two additional analyses. First, if we trained the proposed model starting from \"zero\" (e.g., randomly setting each index value), what results would be obtained? Second, what kind of information is distributed in each trained basis vector? Are there any common/different things between bases trained by different tasks?",
"role": "assistant"
}
]
| [
" This paper proposes a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices.",
"The proposed method can reduce memory consumption by more than 90% while keeping the original model accuracy in both the sentiment analysis task and the machine translation tasks.",
"Overall, the paper is well-written.",
"The motivation is clear, the idea and approaches look suitable and the results clearly follow the motivation.",
"I think it is better to clarify in the paper that the proposed method can reduce only the complexity of the input embedding layer.",
"For example, the model is not guaranteed to be able to convert the resulting \"indices\" back to actual words (i.e., there are multiple words that have completely the same indices, such as rows 4 and 6 in Table 5), and also there is no trivial method to restore the original indices from the composite vector.",
"As a result, the model couldn't also be used as a proxy for the word prediction (softmax) layer, which is another but usually more critical bottleneck of the machine translation task.",
"For readers' comprehension, it would be nice to add results about the whole memory consumption of each model as well.",
"Also, although this paper is focused on only the input embeddings, the authors should refer to some recent papers that tackle reducing the complexity of the softmax layer.",
"There are also many studies, and citing similar approaches may help readers to comprehend the overall scope of these studies.",
"Furthermore, I would like to see two additional analyses.",
"First, if we trained the proposed model starting from \"zero\" (e.g., randomly setting each index value), what results would be obtained?",
"Second, what kind of information is distributed in each trained basis vector?",
"Are there any common/different things between bases trained by different tasks?"
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 2,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 14
} | 0.071429 | 0.071429 | 0.214286 | 0.785714 | 0.142857 | 0.357143 | 0.285714 | 0.357143 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 2.269932 | 0.015782 |
Intriguing Properties of Adversarial Examples | It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white emph{and} black box attacks compared to previous attempts. | Very intriguing paper and results to say the least. I like the way it is written, and the neat interpretations that the authors give of what is going on (instead of assuming that readers will see the same). There is a well presented story of experiments to follow which gives us insight into the problem. Interesting insight into defensive distillation and the effects of uncertainty in neural networks. Quality/Clarity: well written and was easy for me to read Originality: Brings both new ideas and unexpected experimental results. Significance: Creates more questions than it answers, which imo is a positive as this topic definitely deserves more research. Remarks: - Maybe re-render Figure 3 at a higher resolution? - The caption of Figure 5 doesn*t match the labels in the figure*s legend, and also has a weird wording, making it unclear what (a) and (b) refer to. - In section 4 you say you test your models with FGSM accuracy, but in Figure 7 you report stepll and PGD accuracy, could you also plot the same curves for FGSM? - In Figure 4, I*m not sure I understand the right-tail of the distributions. Does it mean that when Delta_ij is very large, epsilon can be very small and still cause an adversarial pertubation? If so does it mean that overconfidence in the extreme is also bad? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIntriguing Properties of Adversarial Examples\nIt is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white emph{and} black box attacks compared to previous attempts.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Very intriguing paper and results to say the least. I like the way it is written, and the neat interpretations that the authors give of what is going on (instead of assuming that readers will see the same). There is a well presented story of experiments to follow which gives us insight into the problem. Interesting insight into defensive distillation and the effects of uncertainty in neural networks. Quality/Clarity: well written and was easy for me to read. Originality: Brings both new ideas and unexpected experimental results. Significance: Creates more questions than it answers, which imo is a positive as this topic definitely deserves more research. Remarks: - Maybe re-render Figure 3 at a higher resolution? - The caption of Figure 5 doesn't match the labels in the figure's legend, and also has weird wording, making it unclear what (a) and (b) refer to. - In section 4 you say you test your models with FGSM accuracy, but in Figure 7 you report step l.l. and PGD accuracy, could you also plot the same curves for FGSM? - In Figure 4, I'm not sure I understand the right-tail of the distributions. Does it mean that when Delta_ij is very large, epsilon can be very small and still cause an adversarial perturbation? If so, does it mean that overconfidence in the extreme is also bad?",
"role": "assistant"
}
]
| [
" Very intriguing paper and results to say the least.",
"I like the way it is written, and the neat interpretations that the authors give of what is going on (instead of assuming that readers will see the same).",
"There is a well presented story of experiments to follow which gives us insight into the problem.",
"Interesting insight into defensive distillation and the effects of uncertainty in neural networks.",
"Quality/Clarity: well written and was easy for me to read. Originality: Brings both new ideas and unexpected experimental results.",
"Significance: Creates more questions than it answers, which imo is a positive as this topic definitely deserves more research.",
"Remarks: - Maybe re-render Figure 3 at a higher resolution?",
"- The caption of Figure 5 doesn't match the labels in the figure's legend, and also has weird wording, making it unclear what (a) and (b) refer to.",
"- In section 4 you say you test your models with FGSM accuracy, but in Figure 7 you report step l.l. and PGD accuracy, could you also plot the same curves for FGSM?",
"- In Figure 4, I'm not sure I understand the right-tail of the distributions.",
"Does it mean that when Delta_ij is very large, epsilon can be very small and still cause an adversarial perturbation?",
"If so, does it mean that overconfidence in the extreme is also bad?"
] | {
"criticism": 2,
"example": 4,
"importance_and_relevance": 4,
"materials_and_methods": 3,
"praise": 6,
"presentation_and_reporting": 5,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 12
} | 0.166667 | 0.333333 | 0.333333 | 0.25 | 0.5 | 0.416667 | 0.333333 | 0.25 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.583333 | 2.567551 | 0.015782 |
Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings. | This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer. The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image. The multi-scale representation allows for better performance at early stages of the network. Finally the dense connectivity allows to reduce the negative effect that early classifiers have on the feature representation for the following layers. A thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation. Pros: - The presentation is clear and easy to follow. - The structure of the network is clearly justified in section 4. - The use of dense connectivity to avoid the loss of performance of using early-exit classifier is very interesting. - The evaluation in terms of anytime prediction and budgeted batch classification can represent real case scenarios. - Results are very promising, with 5x speed-ups and same or better accuracy that previous models. - The extensive experimentation shows that the proposed network is better than previous approaches under different regimes. Cons: - Results about the more efficient densenet* could be shown in the main paper Additional Comments: - Why in training you used logistic loss instead of the more common cross-entropy loss? Has this any connection with the final performance of the network? - In fig. 5 left for completeness I would like to see also results for DenseNet^MT and ResNet^MT - In fig. 5 left I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in section 5.1 anytime prediction results - How the budget in terms of Mul-Adds is actually estimated? I think that this paper present a very powerful approach to speed-up the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them. 
The experimental evaluation is complete and accurate. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMulti-Scale Dense Networks for Resource Efficient Image Classification\nIn this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer. The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image. The multi-scale representation allows for better performance at early stages of the network. Finally the dense connectivity allows to reduce the negative effect that early classifiers have on the feature representation for the following layers. A thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation. Pros: - The presentation is clear and easy to follow. - The structure of the network is clearly justified in section 4. - The use of dense connectivity to avoid the loss of performance of using early-exit classifier is very interesting. - The evaluation in terms of anytime prediction and budgeted batch classification can represent real case scenarios. - Results are very promising, with 5x speed-ups and same or better accuracy that previous models. - The extensive experimentation shows that the proposed network is better than previous approaches under different regimes. Cons: - Results about the more efficient densenet* could be shown in the main paper Additional Comments: - Why in training you used logistic loss instead of the more common cross-entropy loss? Has this any connection with the final performance of the network? - In fig. 5 left for completeness I would like to see also results for DenseNet^MT and ResNet^MT - In fig. 5 left I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in section 5.1 anytime prediction results - How the budget in terms of Mul-Adds is actually estimated? 
I think that this paper presents a very powerful approach to speed up the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them. The experimental evaluation is complete and accurate.",
"role": "assistant"
}
]
| [
" This paper introduces a new model to perform image classification with limited computational resources at test time.",
"The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer.",
"The multiple classifiers allow for a finer selection of the amount of computation needed for a given input image.",
"The multi-scale representation allows for better performance at early stages of the network.",
"Finally the dense connectivity allows to reduce the negative effect that early classifiers have on the feature representation for the following layers.",
"A thorough evaluation on ImageNet and Cifar100 shows that the network can perform better than previous models and ensembles of previous models with a reduced amount of computation.",
"Pros: - The presentation is clear and easy to follow.",
"- The structure of the network is clearly justified in section 4.",
"- The use of dense connectivity to avoid the loss of performance of using early-exit classifier is very interesting.",
"- The evaluation in terms of anytime prediction and budgeted batch classification can represent real case scenarios.",
"- Results are very promising, with 5x speed-ups and same or better accuracy that previous models.",
"- The extensive experimentation shows that the proposed network is better than previous approaches under different regimes.",
"Cons: - Results about the more efficient densenet* could be shown in the main paper Additional Comments: - Why in training you used logistic loss instead of the more common cross-entropy loss?",
"Has this any connection with the final performance of the network?",
"- In fig.",
"5 left for completeness I would like to see also results for DenseNet^MT and ResNet^MT - In fig.",
"5 left I cannot find the 4% and 8% higher accuracy with 0.5x10^10 to 1.0x10^10 FLOPs, as mentioned in section 5.1 anytime prediction results - How the budget in terms of Mul-Adds is actually estimated?",
"I think that this paper present a very powerful approach to speed-up the computational cost of a CNN at test time and clearly explains some of the common trade-offs between speed and accuracy and how to improve them.",
"The experimental evaluation is complete and accurate."
] | {
"criticism": 0,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 14,
"praise": 6,
"presentation_and_reporting": 5,
"results_and_discussion": 7,
"suggestion_and_solution": 4,
"total": 19
} | 0 | 0.105263 | 0.157895 | 0.736842 | 0.315789 | 0.263158 | 0.368421 | 0.210526 | 19 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.157895 | 1.589727 | 0.568167 |
Variational Bi-LSTMs | Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs), which model sequences along both forward and backward directions, generally perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a dependence between the two paths (during training, but which may be omitted during inference). Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance. | *Quality* The paper is easy to parse, with clear diagrams and derivations at the start. The problem context is clearly stated, as is the proposed model. The improvements in terms of average log-likelihood are clear. The model does improve over state-of-the-art in some cases, but not all. Based on the presented findings, it is difficult to determine the quality of the learned models overall, since they are only evaluated in terms of average log likelihood. It is also difficult to determine whether the improvements are due to the model change, or some difference in how the models themselves were trained (particularly in the case of Z-Forcing, a closely related technique). I would like to see more exploration of this point, as the section titled “ablation studies” is short and does not sufficiently address the issue of what component of the model is contributing to the observed improvements in average log-likelihood. Hence, I have assigned a score of "4" for the following reasons: the quality of the generated models is unclear; the paper does not clearly distinguish itself from the closely-related Z-Forcing concept (published at NIPS 2017); and the reasons for the improvements shown in average log-likelihood are not explored sufficiently, that is, the ablation studies don*t eliminate key parts of the model that could provide this information. More information on this decision is given in the remainder. *Clarity* A lack of generated samples in the Experimental Results section makes it difficult to evaluate the performance of the models; log-likelihood alone can be an inadequate measure of performance without some care in how it is calculated and interpreted (refer, e.g., to Theis et al. 2016, “A Note on the Evaluation of Generative Models”). There are some typos and organizational issues. For example, VAEs are reintroduced in the Related Works section, only to provide an explanation for an unrelated optimization challenge with the use of RNNs as encoders and decoders. I also find the motivations for the proposed model itself a little unclear. It seems unnatural to introduce a side-channel-cum-regularizer between a sequence moving forward in time and the same sequence moving backwards, through a variational distribution. In the introduction, improved regularization for LSTM models is cited as a primary motivation for introducing and learning two approximate distributions for latent variables between the forward and backward paths of a bi-LSTM. 
Is there a serious need for new regularization in such models? The need for this particular regularization choice is not particularly clear based on this explanation, nor are the improvements state-of-the-art in all cases. This weakens a possible theoretical contribution of the paper. *Originality* The proposed modification appears to amount to a regularizer for bi-LSTMs which bears close similarity to Z-Forcing (cited in the paper). I recommend a more careful comparison between the two methods. Without such a comparison, they are a little hard to distinguish, and the originality of this paper is hard to evaluate. Both appear to employ the same core idea of regularizing an LSTM using a learned variational distributions. The differences *seem* to be in the small details, and these details appear to provide better performance in terms of average log-likelihood on all tasks compared to Z-Forcing--but, crucially, not compared to other models in all cases. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariational Bi-LSTMs\nRecurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (Bi-LSTMs), which model sequences along both forward and backward directions, generally perform better at such tasks because they capture a richer representation of the data. In the training of Bi-LSTMs, the forward and backward paths are learned independently. We propose a variant of the Bi-LSTM architecture, which we call Variational Bi-LSTM, that creates a dependence between the two paths (during training, but which may be omitted during inference). Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n *Quality* The paper is easy to parse, with clear diagrams and derivations at the start. The problem context is clearly stated, as is the proposed model. The improvements in terms of average log-likelihood are clear. The model does improve over state-of-the-art in some cases, but not all. Based on the presented findings, it is difficult to determine the quality of the learned models overall, since they are only evaluated in terms of average log likelihood. It is also difficult to determine whether the improvements are due to the model change, or some difference in how the models themselves were trained (particularly in the case of Z-Forcing, a closely related technique). I would like to see more exploration of this point, as the section titled “ablation studies” is short and does not sufficiently address the issue of what component of the model is contributing to the observed improvements in average log-likelihood. Hence, I have assigned a score of \"4\" for the following reasons: the quality of the generated models is unclear; the paper does not clearly distinguish itself from the closely-related Z-Forcing concept (published at NIPS 2017); and the reasons for the improvements shown in average log-likelihood are not explored sufficiently, that is, the ablation studies don*t eliminate key parts of the model that could provide this information. More information on this decision is given in the remainder. *Clarity* A lack of generated samples in the Experimental Results section makes it difficult to evaluate the performance of the models; log-likelihood alone can be an inadequate measure of performance without some care in how it is calculated and interpreted (refer, e.g., to Theis et al. 2016, “A Note on the Evaluation of Generative Models”). There are some typos and organizational issues. For example, VAEs are reintroduced in the Related Works section, only to provide an explanation for an unrelated optimization challenge with the use of RNNs as encoders and decoders. I also find the motivations for the proposed model itself a little unclear. 
It seems unnatural to introduce a side-channel-cum-regularizer between a sequence moving forward in time and the same sequence moving backwards, through a variational distribution. In the introduction, improved regularization for LSTM models is cited as a primary motivation for introducing and learning two approximate distributions for latent variables between the forward and backward paths of a bi-LSTM. Is there a serious need for new regularization in such models? The need for this particular regularization choice is not particularly clear based on this explanation, nor are the improvements state-of-the-art in all cases. This weakens a possible theoretical contribution of the paper. *Originality* The proposed modification appears to amount to a regularizer for bi-LSTMs which bears close similarity to Z-Forcing (cited in the paper). I recommend a more careful comparison between the two methods. Without such a comparison, they are a little hard to distinguish, and the originality of this paper is hard to evaluate. Both appear to employ the same core idea of regularizing an LSTM using a learned variational distributions. The differences *seem* to be in the small details, and these details appear to provide better performance in terms of average log-likelihood on all tasks compared to Z-Forcing--but, crucially, not compared to other models in all cases.",
"role": "assistant"
}
]
| [
" *Quality* The paper is easy to parse, with clear diagrams and derivations at the start.",
"The problem context is clearly stated, as is the proposed model.",
"The improvements in terms of average log-likelihood are clear.",
"The model does improve over state-of-the-art in some cases, but not all.",
"Based on the presented findings, it is difficult to determine the quality of the learned models overall, since they are only evaluated in terms of average log likelihood.",
"It is also difficult to determine whether the improvements are due to the model change, or some difference in how the models themselves were trained (particularly in the case of Z-Forcing, a closely related technique).",
"I would like to see more exploration of this point, as the section titled “ablation studies” is short and does not sufficiently address the issue of what component of the model is contributing to the observed improvements in average log-likelihood.",
"Hence, I have assigned a score of \"4\" for the following reasons: the quality of the generated models is unclear; the paper does not clearly distinguish itself from the closely-related Z-Forcing concept (published at NIPS 2017); and the reasons for the improvements shown in average log-likelihood are not explored sufficiently, that is, the ablation studies don*t eliminate key parts of the model that could provide this information.",
"More information on this decision is given in the remainder.",
"*Clarity* A lack of generated samples in the Experimental Results section makes it difficult to evaluate the performance of the models; log-likelihood alone can be an inadequate measure of performance without some care in how it is calculated and interpreted (refer, e.g., to Theis et al.",
"2016, “A Note on the Evaluation of Generative Models”).",
"There are some typos and organizational issues.",
"For example, VAEs are reintroduced in the Related Works section, only to provide an explanation for an unrelated optimization challenge with the use of RNNs as encoders and decoders.",
"I also find the motivations for the proposed model itself a little unclear.",
"It seems unnatural to introduce a side-channel-cum-regularizer between a sequence moving forward in time and the same sequence moving backwards, through a variational distribution.",
"In the introduction, improved regularization for LSTM models is cited as a primary motivation for introducing and learning two approximate distributions for latent variables between the forward and backward paths of a bi-LSTM.",
"Is there a serious need for new regularization in such models?",
"The need for this particular regularization choice is not particularly clear based on this explanation, nor are the improvements state-of-the-art in all cases.",
"This weakens a possible theoretical contribution of the paper.",
"*Originality* The proposed modification appears to amount to a regularizer for bi-LSTMs which bears close similarity to Z-Forcing (cited in the paper).",
"I recommend a more careful comparison between the two methods.",
"Without such a comparison, they are a little hard to distinguish, and the originality of this paper is hard to evaluate.",
"Both appear to employ the same core idea of regularizing an LSTM using a learned variational distributions.",
"The differences *seem* to be in the small details, and these details appear to provide better performance in terms of average log-likelihood on all tasks compared to Z-Forcing--but, crucially, not compared to other models in all cases."
] | {
"criticism": 11,
"example": 2,
"importance_and_relevance": 1,
"materials_and_methods": 19,
"praise": 2,
"presentation_and_reporting": 6,
"results_and_discussion": 8,
"suggestion_and_solution": 3,
"total": 24
} | 0.458333 | 0.083333 | 0.041667 | 0.791667 | 0.083333 | 0.25 | 0.333333 | 0.125 | 24 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 0.256993 | 1.909674 |
Toward predictive machine learning for active vision | We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression. Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition/compression rates. | This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images. It describes a cognitive architecture for the same, and provide analyses in terms of processing compression and "confirmation biases" in the model. – Active perception, and more specifically recognition through saccades (or viewpoint selection) is an interesting biologically-inspired approach and seems like an intuitive and promising way to improve efficiency. The problem and its potential applications are well motivated. – The perception-driven control formulation is well-detailed and simple to follow. – The achieved compression rates are significant and impressive, though additional demonstration of performance on more challenging datasets would have been more compelling Questions and comments: – While an 85% compression rate is significant, 88% accuracy on MNIST seems poor. A plot demonstrating the tradeoff of accuracy for compression (by varying Href or other parameters) would provide a more complete picture of performance. Knowing baseline performance (without active inference) would help put numbers in perspective by providing a performance bound due to modeling choices. – What does the distribution of number of saccades required per recognition (for a given threshold) look like over the entire dataset, i.e. how many are dead-easy vs difficult? – Steady state assumption: How can this be relaxed to further generalize to non-static scenes? – Figure 3 is low resolution and difficult to read. Post-rebuttal comments: I have revised my score after considering comments from other reviewers and the revised paper. While the revised version contains more experimental details, the paper in its present form lacks comparisons to other gaze selection and saliency models which are required to put results in context. The paper also contains grammatical errors and is somewhat difficult to understand. Finally, while it proposes an interesting formulation of a well-studied problem, more comparisons and analysis are required to validate the approach. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nToward predictive machine learning for active vision\nWe develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression. Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition/compression rates.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images. It describes a cognitive architecture for the same, and provide analyses in terms of processing compression and \"confirmation biases\" in the model. – Active perception, and more specifically recognition through saccades (or viewpoint selection) is an interesting biologically-inspired approach and seems like an intuitive and promising way to improve efficiency. The problem and its potential applications are well motivated. – The perception-driven control formulation is well-detailed and simple to follow. – The achieved compression rates are significant and impressive, though additional demonstration of performance on more challenging datasets would have been more compelling Questions and comments: – While an 85% compression rate is significant, 88% accuracy on MNIST seems poor. A plot demonstrating the tradeoff of accuracy for compression (by varying Href or other parameters) would provide a more complete picture of performance. Knowing baseline performance (without active inference) would help put numbers in perspective by providing a performance bound due to modeling choices. – What does the distribution of number of saccades required per recognition (for a given threshold) look like over the entire dataset, i.e. how many are dead-easy vs difficult? – Steady state assumption: How can this be relaxed to further generalize to non-static scenes? – Figure 3 is low resolution and difficult to read. Post-rebuttal comments: I have revised my score after considering comments from other reviewers and the revised paper. While the revised version contains more experimental details, the paper in its present form lacks comparisons to other gaze selection and saliency models which are required to put results in context. The paper also contains grammatical errors and is somewhat difficult to understand. 
Finally, while it proposes an interesting formulation of a well-studied problem, more comparisons and analysis are required to validate the approach.",
"role": "assistant"
}
]
| [
" This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images.",
"It describes a cognitive architecture for the same, and provide analyses in terms of processing compression and \"confirmation biases\" in the model.",
"– Active perception, and more specifically recognition through saccades (or viewpoint selection) is an interesting biologically-inspired approach and seems like an intuitive and promising way to improve efficiency.",
"The problem and its potential applications are well motivated.",
"– The perception-driven control formulation is well-detailed and simple to follow.",
"– The achieved compression rates are significant and impressive, though additional demonstration of performance on more challenging datasets would have been more compelling Questions and comments: – While an 85% compression rate is significant, 88% accuracy on MNIST seems poor.",
"A plot demonstrating the tradeoff of accuracy for compression (by varying Href or other parameters) would provide a more complete picture of performance.",
"Knowing baseline performance (without active inference) would help put numbers in perspective by providing a performance bound due to modeling choices.",
"– What does the distribution of number of saccades required per recognition (for a given threshold) look like over the entire dataset, i.e.",
"how many are dead-easy vs difficult?",
"– Steady state assumption: How can this be relaxed to further generalize to non-static scenes?",
"– Figure 3 is low resolution and difficult to read.",
"Post-rebuttal comments: I have revised my score after considering comments from other reviewers and the revised paper.",
"While the revised version contains more experimental details, the paper in its present form lacks comparisons to other gaze selection and saliency models which are required to put results in context.",
"The paper also contains grammatical errors and is somewhat difficult to understand.",
"Finally, while it proposes an interesting formulation of a well-studied problem, more comparisons and analysis are required to validate the approach."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 10,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 16
} | 0.25 | 0.0625 | 0.1875 | 0.625 | 0.3125 | 0.1875 | 0.3125 | 0.25 | 16 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.1875 | 2.045458 | 0.142042 |
Towards Binary-Valued Gates for Robust LSTM Training | Long Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface. | The paper argues for pushing the input and forget gate’s output toward 0 or 1, i.e., the LSTM tends to reside in flat region of surface loss, which is likely to generalize well. To achieve that, the sigmoid function in the original LSTM is replaced by a function G that is continuous and differentiable with respect to the parameters (by applying the Gumbel-Softmax trick). As a result, the model is still differentiable while the output gate is approximately binarized. Pros: - The paper is clearly written - The method is new and somehow theoretically guaranteed by the proof of the Proposition 1 - The experiments are clearly explained with detailed configurations - The performance of the method in the model compression task is promising Cons: - The “simple deduction” which states that pushing the gate values toward 0 or 1 correspond to the region of the overall loss surface may need more theoretical analysis - It is confusing whether the output of the gate is sampled based on or computed directly by the function G - The experiments lack many recent baselines on the same dataset (Penn Treebank: Melis et al. (2017) – On the State of the Art of Evaluation in Neural Language Models; WMT: Ashish et.al. (2017) – Attention Is All You Need) - The experiment’s result is only slightly better than the baseline’s - To be more persuasive, the author should include in the baselines other method that can “binerize” the gate values such as the one sharpening the sigmoid function. In short, this work is worth a read. Although the experimental results are not quite persuasive, the method is nice and promising. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nTowards Binary-Valued Gates for Robust LSTM Training\nLong Short-Term Memory (LSTM) is one of the most widely used recurrent structures in sequence modeling. Its goal is to use gates to control the information flow (e.g., whether to skip some information/transformation or not) in the recurrent computations, although its practical implementation based on soft gates only partially achieves this goal and is easy to overfit. In this paper, we propose a new way for LSTM training, which pushes the values of the gates towards 0 or 1. By doing so, we can (1) better control the information flow: the gates are mostly open or closed, instead of in a middle state; and (2) avoid overfitting to certain extent: the gates operate at their flat regions, which is shown to correspond to better generalization ability. However, learning towards discrete values of the gates is generally difficult. To tackle this challenge, we leverage the recently developed Gumbel-Softmax trick from the field of variational methods, and make the model trainable with standard backpropagation. Experimental results on language modeling and machine translation show that (1) the values of the gates generated by our method are more reasonable and intuitively interpretable, and (2) our proposed method generalizes better and achieves better accuracy on test sets in all tasks. Moreover, the learnt models are not sensitive to low-precision approximation and low-rank approximation of the gate parameters due to the flat loss surface.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper argues for pushing the input and forget gate’s output toward 0 or 1, i.e., the LSTM tends to reside in flat region of surface loss, which is likely to generalize well. To achieve that, the sigmoid function in the original LSTM is replaced by a function G that is continuous and differentiable with respect to the parameters (by applying the Gumbel-Softmax trick). As a result, the model is still differentiable while the output gate is approximately binarized. Pros: - The paper is clearly written - The method is new and somehow theoretically guaranteed by the proof of the Proposition 1 - The experiments are clearly explained with detailed configurations - The performance of the method in the model compression task is promising Cons: - The “simple deduction” which states that pushing the gate values toward 0 or 1 correspond to the region of the overall loss surface may need more theoretical analysis - It is confusing whether the output of the gate is sampled based on or computed directly by the function G - The experiments lack many recent baselines on the same dataset (Penn Treebank: Melis et al. (2017) – On the State of the Art of Evaluation in Neural Language Models; WMT: Ashish et.al. (2017) – Attention Is All You Need) - The experiment’s result is only slightly better than the baseline’s - To be more persuasive, the author should include in the baselines other method that can “binerize” the gate values such as the one sharpening the sigmoid function. In short, this work is worth a read. Although the experimental results are not quite persuasive, the method is nice and promising.",
"role": "assistant"
}
]
| [
" The paper argues for pushing the input and forget gate’s output toward 0 or 1, i.e., the LSTM tends to reside in flat region of surface loss, which is likely to generalize well.",
"To achieve that, the sigmoid function in the original LSTM is replaced by a function G that is continuous and differentiable with respect to the parameters (by applying the Gumbel-Softmax trick).",
"As a result, the model is still differentiable while the output gate is approximately binarized.",
"Pros: - The paper is clearly written - The method is new and somehow theoretically guaranteed by the proof of the Proposition 1 - The experiments are clearly explained with detailed configurations - The performance of the method in the model compression task is promising Cons: - The “simple deduction” which states that pushing the gate values toward 0 or 1 correspond to the region of the overall loss surface may need more theoretical analysis - It is confusing whether the output of the gate is sampled based on or computed directly by the function G - The experiments lack many recent baselines on the same dataset (Penn Treebank: Melis et al.",
"(2017) – On the State of the Art of Evaluation in Neural Language Models; WMT: Ashish et.al.",
"(2017) – Attention Is All You Need) - The experiment’s result is only slightly better than the baseline’s - To be more persuasive, the author should include in the baselines other method that can “binerize” the gate values such as the one sharpening the sigmoid function.",
"In short, this work is worth a read.",
"Although the experimental results are not quite persuasive, the method is nice and promising."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 8
} | 0.25 | 0 | 0.125 | 0.75 | 0.125 | 0.125 | 0.5 | 0.375 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 1.855439 | 0.394561 |
One-shot and few-shot learning of word embeddings | Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter. | The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data. The authors provide a good motivation for the task and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision. The problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems. However, the demonstrated experimental results do not match the claims which seems a little grand. Overall, the empirical results is unsatisfactory. The authors pick a few example words and provide a detailed analysis. This is useful to understand how the test perplexity varies with #training examples for these individual settings. However, it is hardly enough to draw conclusion about the general applicability of the technique or effectiveness of the results. Why were these specific words chosen? If the reason is due to some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property. Such an analysis would be useful to understand the effectiveness of the overall approach. Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale. The technical contributions are also not novel. Coupled with the narrow experimentation protocol, it does not make the paper’s contributions or proposed claims convincing. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOne-shot and few-shot learning of word embeddings\nStandard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data. The authors provide a good motivation for the task and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision. The problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems. However, the demonstrated experimental results do not match the claims which seems a little grand. Overall, the empirical results is unsatisfactory. The authors pick a few example words and provide a detailed analysis. This is useful to understand how the test perplexity varies with #training examples for these individual settings. However, it is hardly enough to draw conclusion about the general applicability of the technique or effectiveness of the results. Why were these specific words chosen? If the reason is due to some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property. Such an analysis would be useful to understand the effectiveness of the overall approach. Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale. The technical contributions are also not novel. Coupled with the narrow experimentation protocol, it does not make the paper’s contributions or proposed claims convincing.",
"role": "assistant"
}
]
| [
" The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data.",
"The authors provide a good motivation for the task and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision.",
"The problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems.",
"However, the demonstrated experimental results do not match the claims which seems a little grand.",
"Overall, the empirical results is unsatisfactory.",
"The authors pick a few example words and provide a detailed analysis.",
"This is useful to understand how the test perplexity varies with #training examples for these individual settings.",
"However, it is hardly enough to draw conclusion about the general applicability of the technique or effectiveness of the results.",
"Why were these specific words chosen?",
"If the reason is due to some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property.",
"Such an analysis would be useful to understand the effectiveness of the overall approach.",
"Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale.",
"The technical contributions are also not novel.",
"Coupled with the narrow experimentation protocol, it does not make the paper’s contributions or proposed claims convincing."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 5,
"suggestion_and_solution": 4,
"total": 14
} | 0.357143 | 0 | 0.285714 | 0.5 | 0.214286 | 0.357143 | 0.357143 | 0.285714 | 14 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.357143 | 2.34136 | 0.015782 |
FastNorm: Improving Numerical Stability of Deep Network Training with Efficient Normalization | We propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing an computation with an one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method. | This authors proposed to use an implicit weight normalization approach to replace the explicit weight normalization used in training of neural networks. The authors claimed to obtain efficiency improvement and better numerical stability. This is a short paper that contains five pages. The idea of the proposed implicit weight normalization is to apply the normalization to scaling the input rather than the rows of the matrices. In terms of the overall time complexity, the improvement seems quite limited considering that the normalization is not the bottleneck operations in the training. In addition, it is not very clear how the proposed approach benefits the mini-batch training of the network. In terms of numerical stability, though experimental results were reported, there is no theoretical analysis. The experiments are quite limited. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFastNorm: Improving Numerical Stability of Deep Network Training with Efficient Normalization\nWe propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing an computation with an one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This authors proposed to use an implicit weight normalization approach to replace the explicit weight normalization used in training of neural networks. The authors claimed to obtain efficiency improvement and better numerical stability. This is a short paper that contains five pages. The idea of the proposed implicit weight normalization is to apply the normalization to scaling the input rather than the rows of the matrices. In terms of the overall time complexity, the improvement seems quite limited considering that the normalization is not the bottleneck operations in the training. In addition, it is not very clear how the proposed approach benefits the mini-batch training of the network. In terms of numerical stability, though experimental results were reported, there is no theoretical analysis. The experiments are quite limited.",
"role": "assistant"
}
]
| [
" This authors proposed to use an implicit weight normalization approach to replace the explicit weight normalization used in training of neural networks.",
"The authors claimed to obtain efficiency improvement and better numerical stability.",
"This is a short paper that contains five pages.",
"The idea of the proposed implicit weight normalization is to apply the normalization to scaling the input rather than the rows of the matrices.",
"In terms of the overall time complexity, the improvement seems quite limited considering that the normalization is not the bottleneck operations in the training.",
"In addition, it is not very clear how the proposed approach benefits the mini-batch training of the network.",
"In terms of numerical stability, though experimental results were reported, there is no theoretical analysis.",
"The experiments are quite limited."
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 8
} | 0.5 | 0 | 0.125 | 0.875 | 0.125 | 0.125 | 0.5 | 0.125 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.375 | 1.980439 | 0.394561 |
FastNorm: Improving Numerical Stability of Deep Network Training with Efficient Normalization | We propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing an computation with an one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method. | This paper proposes a computationally fast method to train neural networks with normalized weights. Experiments demonstrate that their method is promising compared to the competitor “NormProp” which explicitly normalizes the weights of neural networks. Pros: (1) The paper is easy to follow. (2) Authors use figures that are easy to understand to explain their core idea, i.e., maintaining a vector which estimates the row norm of weight matrix and implicitly normalizing weights. Cons: (1) If we count the matrix multiplication operation in fc layer along with normalization (in common cases normalization should follow a weighted layer), the whole computation complexity becomes O(mn) rather than O(n+m), so I doubt how fast it could be in the common case. (2) Authors did a MNIST experiment with a 2-fc layer neural network for comparing their FastNorm to NormProp. It is a bit strange that they do not show the difference of speed, but show that FastNorm can outperform NormProp in terms of classification accuracy with a higher learning rate. Since the efficiency is one of the main contributions, I suggest authors add this comparison. (3) The proposed FastNorm improves the stability by observing the standard deviation of validation accuracies in training phase. The authors attribute this to the reduction of accumulated rounding error in training process, which is somewhat against the community’s consensus, i.e., float precision is not that important so we can use float32 or even float16 to train/do inference for neural networks. I’m curious if this phenomenon still holds if authors use float64 in the experiments. Some typos: First line in page 3: “brining” should be “bringing” Overall, I think the current version of the paper is not ready for ICLR conference. Authors need more experiments to show their approach’s effectiveness. For example, batching and convolution as mentioned by authors would be more significant. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nFastNorm: Improving Numerical Stability of Deep Network Training with Efficient Normalization\nWe propose a modification to weight normalization techniques that provides the same convergence benefits but requires fewer computational operations. The proposed method, FastNorm, exploits the low-rank properties of weight updates and infers the norms without explicitly calculating them, replacing an computation with an one for a fully-connected layer. It improves numerical stability and reduces accuracy variance enabling higher learning rate and offering better convergence. We report experimental results that illustrate the advantage of the proposed method.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a computationally fast method to train neural networks with normalized weights. Experiments demonstrate that their method is promising compared to the competitor “NormProp” which explicitly normalizes the weights of neural networks. Pros: (1) The paper is easy to follow. (2) Authors use figures that are easy to understand to explain their core idea, i.e., maintaining a vector which estimates the row norm of weight matrix and implicitly normalizing weights. Cons: (1) If we count the matrix multiplication operation in fc layer along with normalization (in common cases normalization should follow a weighted layer), the whole computation complexity becomes O(mn) rather than O(n+m), so I doubt how fast it could be in the common case. (2) Authors did a MNIST experiment with a 2-fc layer neural network for comparing their FastNorm to NormProp. It is a bit strange that they do not show the difference of speed, but show that FastNorm can outperform NormProp in terms of classification accuracy with a higher learning rate. Since the efficiency is one of the main contributions, I suggest authors add this comparison. (3) The proposed FastNorm improves the stability by observing the standard deviation of validation accuracies in training phase. The authors attribute this to the reduction of accumulated rounding error in training process, which is somewhat against the community’s consensus, i.e., float precision is not that important so we can use float32 or even float16 to train/do inference for neural networks. I’m curious if this phenomenon still holds if authors use float64 in the experiments. Some typos: First line in page 3: “brining” should be “bringing” Overall, I think the current version of the paper is not ready for ICLR conference. Authors need more experiments to show their approach’s effectiveness. For example, batching and convolution as mentioned by authors would be more significant.",
"role": "assistant"
}
]
| [
" This paper proposes a computationally fast method to train neural networks with normalized weights.",
"Experiments demonstrate that their method is promising compared to the competitor “NormProp” which explicitly normalizes the weights of neural networks.",
"Pros: (1) The paper is easy to follow.",
"(2) Authors use figures that are easy to understand to explain their core idea, i.e., maintaining a vector which estimates the row norm of weight matrix and implicitly normalizing weights.",
"Cons: (1) If we count the matrix multiplication operation in fc layer along with normalization (in common cases normalization should follow a weighted layer), the whole computation complexity becomes O(mn) rather than O(n+m), so I doubt how fast it could be in the common case.",
"(2) Authors did a MNIST experiment with a 2-fc layer neural network for comparing their FastNorm to NormProp.",
"It is a bit strange that they do not show the difference of speed, but show that FastNorm can outperform NormProp in terms of classification accuracy with a higher learning rate.",
"Since the efficiency is one of the main contributions, I suggest authors add this comparison.",
"(3) The proposed FastNorm improves the stability by observing the standard deviation of validation accuracies in training phase.",
"The authors attribute this to the reduction of accumulated rounding error in training process, which is somewhat against the community’s consensus, i.e., float precision is not that important so we can use float32 or even float16 to train/do inference for neural networks.",
"I’m curious if this phenomenon still holds if authors use float64 in the experiments.",
"Some typos: First line in page 3: “brining” should be “bringing” Overall, I think the current version of the paper is not ready for ICLR conference.",
"Authors need more experiments to show their approach’s effectiveness.",
"For example, batching and convolution as mentioned by authors would be more significant."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 6,
"suggestion_and_solution": 4,
"total": 14
} | 0.142857 | 0.071429 | 0.071429 | 0.857143 | 0.142857 | 0.285714 | 0.428571 | 0.285714 | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 2.269932 | 0.015782 |
An Online Learning Approach to Generative Adversarial Networks | We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training. | This is an interesting paper, exploring GAN dynamics using ideas from online learning, in particular the pioneering "sparring" follow-the-regularized leader analysis of Freund and Schapire (using what is listed here as Lemma 4). By restricting the discriminator to be a single layer, the maximum player plays over a concave (parameter) space which stabilizes the full sequence of losses so that Lemma 3 can be proved, allowing proof of the dynamics* convergence to a Nash equilibrium. The analysis suggests a practical (heuristic) algorithm incorporating two features which emerge from the theory: L2 regularization and keeping a history of past models. A very simple queue for the latter is shown to do quite competitively in practice. This paper merits acceptance on theoretical merits alone, because the FTRL analysis for convex-concave games is a very robust tool from theory (see also the more recent sequel [Syrgkanis et al. 2016 "Fast convergence of regularized learning in games"]) that is natural to employ to gain insight on the much more brittle GAN case. The practical aspects are also interesting, because the incorporation of added randomness into the mixed generation strategy is an area where theoretical justifications do motivate practical performance gains; these ideas could clearly be developed in future work. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. 
This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn Online Learning Approach to Generative Adversarial Networks\nWe consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This is an interesting paper, exploring GAN dynamics using ideas from online learning, in particular the pioneering \"sparring\" follow-the-regularized leader analysis of Freund and Schapire (using what is listed here as Lemma 4). By restricting the discriminator to be a single layer, the maximum player plays over a concave (parameter) space which stabilizes the full sequence of losses so that Lemma 3 can be proved, allowing proof of the dynamics* convergence to a Nash equilibrium. The analysis suggests a practical (heuristic) algorithm incorporating two features which emerge from the theory: L2 regularization and keeping a history of past models. A very simple queue for the latter is shown to do quite competitively in practice. This paper merits acceptance on theoretical merits alone, because the FTRL analysis for convex-concave games is a very robust tool from theory (see also the more recent sequel [Syrgkanis et al. 2016 \"Fast convergence of regularized learning in games\"]) that is natural to employ to gain insight on the much more brittle GAN case. The practical aspects are also interesting, because the incorporation of added randomness into the mixed generation strategy is an area where theoretical justifications do motivate practical performance gains; these ideas could clearly be developed in future work.",
"role": "assistant"
}
]
| [
" This is an interesting paper, exploring GAN dynamics using ideas from online learning, in particular the pioneering \"sparring\" follow-the-regularized leader analysis of Freund and Schapire (using what is listed here as Lemma 4).",
"By restricting the discriminator to be a single layer, the maximum player plays over a concave (parameter) space which stabilizes the full sequence of losses so that Lemma 3 can be proved, allowing proof of the dynamics* convergence to a Nash equilibrium.",
"The analysis suggests a practical (heuristic) algorithm incorporating two features which emerge from the theory: L2 regularization and keeping a history of past models.",
"A very simple queue for the latter is shown to do quite competitively in practice.",
"This paper merits acceptance on theoretical merits alone, because the FTRL analysis for convex-concave games is a very robust tool from theory (see also the more recent sequel [Syrgkanis et al.",
"2016 \"Fast convergence of regularized learning in games\"]) that is natural to employ to gain insight on the much more brittle GAN case.",
"The practical aspects are also interesting, because the incorporation of added randomness into the mixed generation strategy is an area where theoretical justifications do motivate practical performance gains; these ideas could clearly be developed in future work."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 5,
"praise": 3,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 7
} | 0 | 0 | 0.571429 | 0.714286 | 0.428571 | 0 | 0.285714 | 0.142857 | 7 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |
NOVEL AND EFFECTIVE PARALLEL MIX-GENERATOR GENERATIVE ADVERSARIAL NETWORKS | In this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution. | The paper proposes to use multiple generators to fix mode collapse issue. The multiple generators are trained to be diverse. Each generator uses the reverse KL loss so that it models a single mode. One disadvantage is that it increases the number of networks (and hence the number of parameters). The paper needs some additional experiments to convincingly demonstrate the usefulness of the proposed method. Experiments on a challenging dataset with large number of classes (e.g. ImageNet as done by AC-GAN paper) would better illustrate the power of the method. AC-GAN paper: Conditional Image Synthesis with Auxiliary Classifier GANs https://arxiv.org/pdf/1610.09585.pdf The paper lacks clarity in some places and could use another round of editing/polishing. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNOVEL AND EFFECTIVE PARALLEL MIX-GENERATOR GENERATIVE ADVERSARIAL NETWORKS\nIn this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes to use multiple generators to fix mode collapse issue. The multiple generators are trained to be diverse. Each generator uses the reverse KL loss so that it models a single mode. One disadvantage is that it increases the number of networks (and hence the number of parameters). The paper needs some additional experiments to convincingly demonstrate the usefulness of the proposed method. Experiments on a challenging dataset with large number of classes (e.g. ImageNet as done by AC-GAN paper) would better illustrate the power of the method. AC-GAN paper: Conditional Image Synthesis with Auxiliary Classifier GANs https://arxiv.org/pdf/1610.09585.pdf The paper lacks clarity in some places and could use another round of editing/polishing.",
"role": "assistant"
}
]
| [
" The paper proposes to use multiple generators to fix mode collapse issue.",
"The multiple generators are trained to be diverse.",
"Each generator uses the reverse KL loss so that it models a single mode.",
"One disadvantage is that it increases the number of networks (and hence the number of parameters).",
"The paper needs some additional experiments to convincingly demonstrate the usefulness of the proposed method.",
"Experiments on a challenging dataset with large number of classes (e.g.",
"ImageNet as done by AC-GAN paper) would better illustrate the power of the method.",
"AC-GAN paper: Conditional Image Synthesis with Auxiliary Classifier GANs https://arxiv.org/pdf/1610.09585.pdf The paper lacks clarity in some places and could use another round of editing/polishing."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 8
} | 0.25 | 0 | 0.125 | 0.875 | 0.125 | 0.25 | 0.125 | 0.375 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN. | The authors propose to use synthetic data generated by GANs as a replacement for personally identifiable data in training ML models for privacy-sensitive applications such as medicine. In particular it demonstrates adversarial training of a recurrent generator for an ICU monitoring multidimensional time series, proposes to evaluate such models by the performance (on real data) of supervised classifiers trained on the synthetic data ("TSTR"), and empirically analyzes the privacy implications of training and using such a model. This paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains. But it is somewhat unfocused and does not seem make a clear contribution to any of these. The recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g. http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g. WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation. I also find it difficult to assess whether the proposed model is actually generating reasonable time series. It may be true that "one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data" because it could not rule out that case that the model has captured the marginal distribution in each dimension but not joint structure. However producing marginal distributions that look reasonable is at least a *necessary* condition and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples. The basic privacy paradigm proposed seems to be: 1. 
train a GAN using private data 2. generate new synthetic data, assume this data does not leak private information 3. train a supervised classifier on the private data so that the GAN training-sampling loop basically functions as an anonymization procedure. For this to pan out, we*d need to see that the GAN samples are a) useful for a range of supervised tasks, and b) do not leak private information. But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it*s not obvious that the small set of tasks evaluated are representative of all tasks people might care about. The attempts to demonstrate empirically that the GAN does not memorize training data aren*t particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn*t reveal private data doesn*t imply that a determined attacker wouldn*t succeed. In this vein, the experiments with DP-SGD are more interesting, although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and and it is not clear how the settings (ε = 0.5 and δ ≤ 9.8 × 10−3) were selected or whether they provide a useful level of privacy. That said I agree this is an interesting avenue for future work. Finally it*s worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor). It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large. One could imagine coding missing data in various ways (e.g. asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nReal-valued (Medical) Time Series Generation with Recurrent Conditional GANs\nGenerative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose to use synthetic data generated by GANs as a replacement for personally identifiable data in training ML models for privacy-sensitive applications such as medicine. In particular it demonstrates adversarial training of a recurrent generator for an ICU monitoring multidimensional time series, proposes to evaluate such models by the performance (on real data) of supervised classifiers trained on the synthetic data (\"TSTR\"), and empirically analyzes the privacy implications of training and using such a model. This paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains. But it is somewhat unfocused and does not seem make a clear contribution to any of these. The recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g. http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g. WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation. I also find it difficult to assess whether the proposed model is actually generating reasonable time series. It may be true that \"one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data\" because it could not rule out that case that the model has captured the marginal distribution in each dimension but not joint structure. However producing marginal distributions that look reasonable is at least a *necessary* condition and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples. The basic privacy paradigm proposed seems to be: 1. train a GAN using private data 2. generate new synthetic data, assume this data does not leak private information 3. 
train a supervised classifier on the private data so that the GAN training-sampling loop basically functions as an anonymization procedure. For this to pan out, we'd need to see that the GAN samples are a) useful for a range of supervised tasks, and b) do not leak private information. But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it's not obvious that the small set of tasks evaluated are representative of all tasks people might care about. The attempts to demonstrate empirically that the GAN does not memorize training data aren't particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn't reveal private data doesn't imply that a determined attacker wouldn't succeed. In this vein, the experiments with DP-SGD are more interesting, although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and it is not clear how the settings (ε = 0.5 and δ ≤ 9.8 × 10−3) were selected or whether they provide a useful level of privacy. That said I agree this is an interesting avenue for future work. Finally it's worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor). It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large. One could imagine coding missing data in various ways (e.g. asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application.",
"role": "assistant"
}
]
| [
" The authors propose to use synthetic data generated by GANs as a replacement for personally identifiable data in training ML models for privacy-sensitive applications such as medicine.",
"In particular it demonstrates adversarial training of a recurrent generator for an ICU monitoring multidimensional time series, proposes to evaluate such models by the performance (on real data) of supervised classifiers trained on the synthetic data (\"TSTR\"), and empirically analyzes the privacy implications of training and using such a model.",
"This paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains.",
"But it is somewhat unfocused and does not seem make a clear contribution to any of these.",
"The recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g.",
"http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g.",
"WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation.",
"I also find it difficult to assess whether the proposed model is actually generating reasonable time series.",
"It may be true that \"one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data\" because it could not rule out that case that the model has captured the marginal distribution in each dimension but not joint structure.",
"However producing marginal distributions that look reasonable is at least a *necessary* condition and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples.",
"The basic privacy paradigm proposed seems to be: 1. train a GAN using private data 2. generate new synthetic data, assume this data does not leak private information 3. train a supervised classifier on the private data so that the GAN training-sampling loop basically functions as an anonymization procedure.",
"For this to pan out, we*d need to see that the GAN samples are a) useful for a range of supervised tasks, and b) do not leak private information.",
"But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it*s not obvious that the small set of tasks evaluated are representative of all tasks people might care about.",
"The attempts to demonstrate empirically that the GAN does not memorize training data aren*t particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn*t reveal private data doesn*t imply that a determined attacker wouldn*t succeed.",
"In this vein, the experiments with DP-SGD are more interesting, although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and and it is not clear how the settings (ε = 0.5 and δ ≤ 9.8 × 10−3) were selected or whether they provide a useful level of privacy.",
"That said I agree this is an interesting avenue for future work.",
"Finally it*s worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor).",
"It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large.",
"One could imagine coding missing data in various ways (e.g.",
"asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application."
] | {
"criticism": 8,
"example": 2,
"importance_and_relevance": 4,
"materials_and_methods": 17,
"praise": 3,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 20
} | 0.4 | 0.1 | 0.2 | 0.85 | 0.15 | 0.05 | 0.2 | 0.2 | 20 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.15 | 1.376661 | 0.773339 |
Ask the Right Questions: Active Question Reformulation with Reinforcement Learning | We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming. | This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions. In general, the paper is well-written (although there are a lot of typos and grammatical errors that need to be corrected), and the main ideas are clear. It would have been useful to provide some more details and carry out additional experiments to strengthen the merit of the proposed model. Especially, in Section 4.2, more details about the quality of paraphrasing after training with the multilingual, monolingual, and refined models would be helpful. Which evaluation metrics were used to evaluate the quality? Also, more monolingual experiments could have been conducted with state-of-the-art neural paraphrasing models on WikiQA and Quora datasets (e.g. see https://arxiv.org/pdf/1610.03098.pdf and https://arxiv.org/pdf/1709.05074.pdf). More details with examples should be provided about the variants of AQA along with the oracle model. Especially, step-by-step examples (for all alternative models) from input (original question) to question reformulations to output (answer/candidate answers) would be useful to understand how each module/variation is having an impact towards the best possible answer/ground truth. Although experiments on SearchQA demonstrate good results, I think it would be also interesting to see the results on additional datasets e.g. MS MARCO (Nguyen et al., 2016), which is very similar to the SearchQA dataset, in order to confirm the generalizability of the proposed approach. --------------------------------------------- Thanks for revising the paper, I am happy to update my scores. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. 
Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAsk the Right Questions: Active Question Reformulation with Reinforcement Learning\nWe frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions. In general, the paper is well-written (although there are a lot of typos and grammatical errors that need to be corrected), and the main ideas are clear. It would have been useful to provide some more details and carry out additional experiments to strengthen the merit of the proposed model. Especially, in Section 4.2, more details about the quality of paraphrasing after training with the multilingual, monolingual, and refined models would be helpful. Which evaluation metrics were used to evaluate the quality? Also, more monolingual experiments could have been conducted with state-of-the-art neural paraphrasing models on WikiQA and Quora datasets (e.g. see https://arxiv.org/pdf/1610.03098.pdf and https://arxiv.org/pdf/1709.05074.pdf). More details with examples should be provided about the variants of AQA along with the oracle model. Especially, step-by-step examples (for all alternative models) from input (original question) to question reformulations to output (answer/candidate answers) would be useful to understand how each module/variation is having an impact towards the best possible answer/ground truth. Although experiments on SearchQA demonstrate good results, I think it would be also interesting to see the results on additional datasets e.g. MS MARCO (Nguyen et al., 2016), which is very similar to the SearchQA dataset, in order to confirm the generalizability of the proposed approach. --------------------------------------------- Thanks for revising the paper, I am happy to update my scores.",
"role": "assistant"
}
]
| [
" This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers.",
"Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original questions.",
"In general, the paper is well-written (although there are a lot of typos and grammatical errors that need to be corrected), and the main ideas are clear.",
"It would have been useful to provide some more details and carry out additional experiments to strengthen the merit of the proposed model.",
"Especially, in Section 4.2, more details about the quality of paraphrasing after training with the multilingual, monolingual, and refined models would be helpful.",
"Which evaluation metrics were used to evaluate the quality?",
"Also, more monolingual experiments could have been conducted with state-of-the-art neural paraphrasing models on WikiQA and Quora datasets (e.g.",
"see https://arxiv.org/pdf/1610.03098.pdf and https://arxiv.org/pdf/1709.05074.pdf).",
"More details with examples should be provided about the variants of AQA along with the oracle model.",
"Especially, step-by-step examples (for all alternative models) from input (original question) to question reformulations to output (answer/candidate answers) would be useful to understand how each module/variation is having an impact towards the best possible answer/ground truth.",
"Although experiments on SearchQA demonstrate good results, I think it would be also interesting to see the results on additional datasets e.g.",
"MS MARCO (Nguyen et al., 2016), which is very similar to the SearchQA dataset, in order to confirm the generalizability of the proposed approach.",
"--------------------------------------------- Thanks for revising the paper, I am happy to update my scores."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 10,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 2,
"suggestion_and_solution": 8,
"total": 13
} | 0.076923 | 0 | 0.076923 | 0.769231 | 0.307692 | 0.307692 | 0.153846 | 0.615385 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.307692 | 2.307692 | 0 |
STRUCTURED ALIGNMENT NETWORKS | Many tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence. When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors. We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures. Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective. We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset. We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures. | Summary: This paper introduces a structured attention mechanisms to compute alignment scores among all possible spans in two given sentences. The span representations are weighted by the spans marginal scores given by the inside-outside algorithm. Experiments on TREC-QA and SNLI show modest improvement over the word-based structured attention baseline (Parikh et al., 2016). Strengths: The idea of using latent syntactic structure, and computing cross-sentence alignment over spans is very interesting. Weaknesses: The paper is 8.5 pages long. The method did not out-perform other very related structured attention methods (86.8, Kim et al., 2017, 86.9, Liu and Lapata, 2017) Aside from the time complexity from the inside-outside algorithm (as mentioned by the authors in conclusion), the comparison among all pairs of spans is O(n^4), which is more expensive. Am I missing something about the algorithm? It would be nice to show, quantitatively, the agreement between the latent trees and gold/supervised syntax. The paper claimed “the model is able to recover tree structures that very closely mimic syntax”, but it’s hard to draw this conclusion from the two examples in Figure 2. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSTRUCTURED ALIGNMENT NETWORKS\nMany tasks in natural language processing involve comparing two sentences to compute some notion of relevance, entailment, or similarity. Typically this comparison is done either at the word level or at the sentence level, with no attempt to leverage the inherent structure of the sentence. When sentence structure is used for comparison, it is obtained during a non-differentiable pre-processing step, leading to propagation of errors. We introduce a model of structured alignments between sentences, showing how to compare two sentences by matching their latent structures. Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective. We evaluate this model on two sentence comparison tasks: the Stanford natural language inference dataset and the TREC-QA dataset. We find that comparing spans results in superior performance to comparing words individually, and that the learned trees are consistent with actual linguistic structures.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: This paper introduces a structured attention mechanisms to compute alignment scores among all possible spans in two given sentences. The span representations are weighted by the spans marginal scores given by the inside-outside algorithm. Experiments on TREC-QA and SNLI show modest improvement over the word-based structured attention baseline (Parikh et al., 2016). Strengths: The idea of using latent syntactic structure, and computing cross-sentence alignment over spans is very interesting. Weaknesses: The paper is 8.5 pages long. The method did not out-perform other very related structured attention methods (86.8, Kim et al., 2017, 86.9, Liu and Lapata, 2017) Aside from the time complexity from the inside-outside algorithm (as mentioned by the authors in conclusion), the comparison among all pairs of spans is O(n^4), which is more expensive. Am I missing something about the algorithm? It would be nice to show, quantitatively, the agreement between the latent trees and gold/supervised syntax. The paper claimed “the model is able to recover tree structures that very closely mimic syntax”, but it’s hard to draw this conclusion from the two examples in Figure 2.",
"role": "assistant"
}
]
| [
" Summary: This paper introduces a structured attention mechanisms to compute alignment scores among all possible spans in two given sentences.",
"The span representations are weighted by the spans marginal scores given by the inside-outside algorithm.",
"Experiments on TREC-QA and SNLI show modest improvement over the word-based structured attention baseline (Parikh et al., 2016).",
"Strengths: The idea of using latent syntactic structure, and computing cross-sentence alignment over spans is very interesting.",
"Weaknesses: The paper is 8.5 pages long.",
"The method did not out-perform other very related structured attention methods (86.8, Kim et al., 2017, 86.9, Liu and Lapata, 2017) Aside from the time complexity from the inside-outside algorithm (as mentioned by the authors in conclusion), the comparison among all pairs of spans is O(n^4), which is more expensive.",
"Am I missing something about the algorithm?",
"It would be nice to show, quantitatively, the agreement between the latent trees and gold/supervised syntax.",
"The paper claimed “the model is able to recover tree structures that very closely mimic syntax”, but it’s hard to draw this conclusion from the two examples in Figure 2."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 6,
"results_and_discussion": 3,
"suggestion_and_solution": 1,
"total": 9
} | 0.222222 | 0.111111 | 0.111111 | 0.666667 | 0.222222 | 0.666667 | 0.333333 | 0.111111 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.444444 | 2.191926 | 0.252519 |
Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time. | In this paper, the authors present a novel formulation for learning the optimal architecture of a neural network in a multi-task learning framework. Using multi-agent reinforcement learning to find a policy — a mapping from the input layer (and task indicator) to the function block that must be used at the next layer, the paper shows improvement over hard-coded architectures with shared layers. The idea is very interesting and the paper is well-written. The extensive review of the existing literature, along with systematically presented qualitative and quantitative results make for a clear and communicative read. I do think that further scrutiny and research would benefit the work since there are instances when the presented approach enjoys some benefits when solving the problems considered. For example, the more general formulation that uses the dispatcher does not appear to work ver well (Fig. 5), and the situation is only improved by using a task specific router. Overall, I think the idea is worthy of wide dissemination and has great potential for furthering the applications of multi-task learning. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRouting Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning\nMulti-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors present a novel formulation for learning the optimal architecture of a neural network in a multi-task learning framework. Using multi-agent reinforcement learning to find a policy — a mapping from the input layer (and task indicator) to the function block that must be used at the next layer, the paper shows improvement over hard-coded architectures with shared layers. The idea is very interesting and the paper is well-written. The extensive review of the existing literature, along with systematically presented qualitative and quantitative results make for a clear and communicative read. I do think that further scrutiny and research would benefit the work since there are instances when the presented approach enjoys some benefits when solving the problems considered. For example, the more general formulation that uses the dispatcher does not appear to work ver well (Fig. 5), and the situation is only improved by using a task specific router. Overall, I think the idea is worthy of wide dissemination and has great potential for furthering the applications of multi-task learning.",
"role": "assistant"
}
]
| [
" In this paper, the authors present a novel formulation for learning the optimal architecture of a neural network in a multi-task learning framework.",
"Using multi-agent reinforcement learning to find a policy — a mapping from the input layer (and task indicator) to the function block that must be used at the next layer, the paper shows improvement over hard-coded architectures with shared layers.",
"The idea is very interesting and the paper is well-written.",
"The extensive review of the existing literature, along with systematically presented qualitative and quantitative results make for a clear and communicative read.",
"I do think that further scrutiny and research would benefit the work since there are instances when the presented approach enjoys some benefits when solving the problems considered.",
"For example, the more general formulation that uses the dispatcher does not appear to work ver well (Fig.",
"5), and the situation is only improved by using a task specific router.",
"Overall, I think the idea is worthy of wide dissemination and has great potential for furthering the applications of multi-task learning."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 3,
"praise": 4,
"presentation_and_reporting": 1,
"results_and_discussion": 2,
"suggestion_and_solution": 1,
"total": 8
} | 0.125 | 0.125 | 0.5 | 0.375 | 0.5 | 0.125 | 0.25 | 0.125 | 8 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time. | Summary: The paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply. This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task. It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular. - page 4: In algorithm 2, line 6, I do not understand the reward computation. It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on .gamma. In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain. This issue should be corrected or explicitly stated. The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage. - Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output. 
- The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented. In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized. - The WPL algorithm is not clear to me o In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to .hat{R}? how? o Is it assumed that the actions are binary? o I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of a-symmetry between positive and negative gradients? - Page 6: o It is not clear why MNist is tested over 200 examples, where there is a much larger test set available o In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions? - Page 7: The results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization. - Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks. However, this is not-surprising, given that the action-based reward (whos strength is unclear) seems to promote such winner-takes-all behavior. Overall: - The paper is highly unclear in its method representation o There is no unified clear notation. The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors. In notation existing, there are occasional notation errors. o The reward is a) not clear, and b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated. Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant. o The state representation used is not clear, and if I understand correctly, it includes only the current depth. This is a severely limited state representation, which does not enable to learn actions based on intermediate results o The continuous versions of the RL algorithms are not explained at all: no state representation, nor optimization is described. o The presentation suffers from severe over-generalization and lack of clarity, which disabled my ability to understand the network and algorithms for a specific case. Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, will be explained with clear notation end-to-end Beyond the clarity issues, I suspect also that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking. However, it is hard to judge these due to lack of clarity. 
After revision: - Most of the clarity issues were handled well, and the paper now read nicely - It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation). Instead routing depends on the task and depth only. This is still interesting, but is far from reaching context-dependent routing. - The results presented are nice and show that task-dependent routing may be better than plain baseline or the stiching alternative. However, since this is a task transfer issue, I believe several data size points should be tested. For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related task are less required for good performance). - | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRouting Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning\nMulti-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: The paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply. This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task. It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular. - page 4: In algorithm 2, line 6, I do not understand the reward computation. It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on .gamma. In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain. This issue should be corrected or explicitly stated. The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again. This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing. Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage. - Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation. It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods. If I understand correctly, the state representation used in the tabular algorithms includes only the current depth. If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output. - The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented. 
In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized. - The WPL algorithm is not clear to me o In algorithm box 3, what is R_k? I do not see it defined anywhere. Is it related to .hat{R}? how? o Is it assumed that the actions are binary? o I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability. What is the source of a-symmetry between positive and negative gradients? - Page 6: o It is not clear why MNist is tested over 200 examples, where there is a much larger test set available o In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions? - Page 7: The results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer. If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization. - Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks. However, this is not-surprising, given that the action-based reward (whos strength is unclear) seems to promote such winner-takes-all behavior. Overall: - The paper is highly unclear in its method representation o There is no unified clear notation. The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors. In notation existing, there are occasional notation errors. o The reward is a) not clear, and b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated. Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant. o The state representation used is not clear, and if I understand correctly, it includes only the current depth. This is a severely limited state representation, which does not enable to learn actions based on intermediate results o The continuous versions of the RL algorithms are not explained at all: no state representation, nor optimization is described. o The presentation suffers from severe over-generalization and lack of clarity, which disabled my ability to understand the network and algorithms for a specific case. Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, will be explained with clear notation end-to-end Beyond the clarity issues, I suspect also that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking. However, it is hard to judge these due to lack of clarity. 
After revision: - Most of the clarity issues were handled well, and the paper now read nicely - It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation). Instead routing depends on the task and depth only. This is still interesting, but is far from reaching context-dependent routing. - The results presented are nice and show that task-dependent routing may be better than plain baseline or the stiching alternative. However, since this is a task transfer issue, I believe several data size points should be tested. For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related task are less required for good performance). -",
"role": "assistant"
}
]
| [
" Summary: The paper suggests to use a modular network with a controller which makes decisions, at each time step, regarding the next nodule to apply.",
"This network is suggested a tool for solving multi-task scenarios, where certain modules may be shared and others may be trained independently for each task.",
"It is proposed to learn the modules with standard back propagation and the controller with reinforcement learning techniques, mostly tabular.",
"- page 4: In algorithm 2, line 6, I do not understand the reward computation.",
"It seems that either a _{k+1} subscript index is missing for the right hand side R, or an exponent of n-k is missing on .gamma.",
"In the current formula, the final reward affects all decisions without a decay based on the distance between action and reward gain.",
"This issue should be corrected or explicitly stated.",
"The ‘collaboration reward’ is not clearly justified: If I understand correctly, It is stated that actions which were chosen often in the past get higher reward when chosen again.",
"This may create a ‘winner takes all’ effect, but it is not clear why this is beneficial for good routing.",
"Specifically, this term is optimized when a single action is always chosen with high probability – but such a single winner does not seem to be the behavior we want to encourage.",
"- Page 5: It is not described clearly (and better: defined formally) what exactly is the state representation.",
"It is said to include the current network output (which is a vector in R^d), the task label and the depth, but it is not stated how this information is condensed into a single integer index for the tabular methods.",
"If I understand correctly, the state representation used in the tabular algorithms includes only the current depth.",
"If this is true, this constitutes a highly restricted controller, making decisions only based on depth without considering the current output.",
"- The functional approximation versions are even less clear: Again it is not clear what information is contained in the state and how it is represented.",
"In addition it is not clear in this case what network architecture is used for computation of the policy (PG) or valkue (Q-learning), and how exactly they are optimized.",
"- The WPL algorithm is not clear to me o In algorithm box 3, what is R_k?",
"I do not see it defined anywhere.",
"Is it related to .hat{R}?",
"how?",
"o Is it assumed that the actions are binary?",
"o I do not understand why positive gradients are multiplied with the action probability and negative gradients with 1 minus this probability.",
"What is the source of a-symmetry between positive and negative gradients?",
"- Page 6: o It is not clear why MNist is tested over 200 examples, where there is a much larger test set available o In MIN-MTL I do not understand the motivation from creating superclasses composed of 5 random classes each: why do we need such arbitrary and un-natural class definitions?",
"- Page 7: The results on Cifar-100 are compared to several baselines, but not to the standard non-MTL solution: Solve the multi-class classification problem using a softmax loss and a unified, non routing architecture in which all the layers are shared by all classes, with the only distinction in the last classification layer.",
"If the routing solution does not beat this standard baseline, there is no justification for its more complex structure and optimization.",
"- Page 8: The author report that when training the controller with single agent methods the policy collapses into choosing a single module for most tasks.",
"However, this is not-surprising, given that the action-based reward (whos strength is unclear) seems to promote such winner-takes-all behavior.",
"Overall: - The paper is highly unclear in its method representation o There is no unified clear notation.",
"The essential symbols (states, actions, rewards) are not formally defined, and often it is not clear even if they are integers, scalars, or vectors.",
"In notation existing, there are occasional notation errors.",
"o The reward is a) not clear, and b) not well motivated when it is explained, and c) not explicitly stated anywhere: it is said that the action-specific reward may be up to 10 times larger than the final reward, but the actual tradeoff parameter between them is not stated.",
"Note that this parameter is important, as using a 10-times larger action-related reward means that the classification-related reward becomes insignificant.",
"o The state representation used is not clear, and if I understand correctly, it includes only the current depth.",
"This is a severely limited state representation, which does not enable to learn actions based on intermediate results o The continuous versions of the RL algorithms are not explained at all: no state representation, nor optimization is described.",
"o The presentation suffers from severe over-generalization and lack of clarity, which disabled my ability to understand the network and algorithms for a specific case.",
"Instead, I would recommend that in future versions of this document a single network, with a specific router and set of decisions, and with a single algorithm, will be explained with clear notation end-to-end Beyond the clarity issues, I suspect also that the novelty is minor (if the state does not include any information about the current output) and that the empirical baseline is lacking.",
"However, it is hard to judge these due to lack of clarity.",
"After revision: - Most of the clarity issues were handled well, and the paper now read nicely - It is now clear that routing is not done based on the current input (an example is not dynamically routed based on its current representation).",
"Instead routing depends on the task and depth only.",
"This is still interesting, but is far from reaching context-dependent routing.",
"- The results presented are nice and show that task-dependent routing may be better than plain baseline or the stiching alternative.",
"However, since this is a task transfer issue, I believe several data size points should be tested.",
"For example, as data size rises, the task-specific-all-fc alternative is expected to get stronger (as with more data, related task are less required for good performance).",
"-"
] | {
"criticism": 26,
"example": 7,
"importance_and_relevance": 1,
"materials_and_methods": 25,
"praise": 3,
"presentation_and_reporting": 18,
"results_and_discussion": 9,
"suggestion_and_solution": 7,
"total": 45
} | 0.577778 | 0.155556 | 0.022222 | 0.555556 | 0.066667 | 0.4 | 0.2 | 0.155556 | 45 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.133333 | -14.027875 | 16.161208 |
Deep Net Triage: Assessing The Criticality of Network Layers by Structural Compression | Deep network compression seeks to reduce the number of parameters in the network while maintaining a certain level of performance. Deep network distillation seeks to train a smaller network that matches soft-max performance of a larger network. While both regimes have led to impressive performance for their respective goals, neither provide insight into the importance of a given layer in the original model, which is useful if we are to improve our understanding of these highly parameterized models. In this paper, we present the concept of deep net triage, which individually assesses small blocks of convolution layers to understand their collective contribution to the overall performance, which we call emph{criticality}. We call it triage because we assess this criticality by answering the question: what is the impact to the health of the overall network if we compress a block of layers into a single layer. We propose a suite of triage methods and compare them on problem spaces of varying complexity. We ultimately show that, across these problem spaces, deep net triage is able to indicate the of relative importance of different layers. Surprisingly, our local structural compression technique also leads to an improvement in overall accuracy when the final model is fine-tuned globally. | This paper proposes a method to compress a block of layers in a NN (particularly CNNs) evaluates several different sub-approaches. It is easy to follow the paper. Given experiments are nicely setup with intuitive explanations. My main concern about the paper is novelty/importance and lack of experimental comparison to other techniques attacking similar complexity problems. Authors provide a good intro for the problem and existing common approaches addressing network complexity. They locate themselves well in the literature. They provide a simple compact explanation of what their methods are in section 2.3. I find the coding/notation of the sub-methods a little encrypted though. Simple English or better encoding could be used for better readability. Experimental results are explained pretty well. I think the given results/messages are valuable to the community. However the generalizability to different models/network architectures is in question as the paper is mostly based on VGG16. Also the methods/significance seem to be more like an experimental paper and not significant enough for ICLR. I am borderline about this. Comparison between the proposed methods is nicely done but comparison to other literature pruning/compressing models is not experimentally shown. Ideally we expect a reader designing a new architecture to have an idea for which way to go for model compression after reading this paper—results are good to decide between CLRW –STN, but not informative on how they compare to literature. I think this is a significant missing point. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared to previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity: I will read the work in depth and analyze whether any sentences are potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentences or how to make them clearer to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Net Triage: Assessing The Criticality of Network Layers by Structural Compression\nDeep network compression seeks to reduce the number of parameters in the network while maintaining a certain level of performance. Deep network distillation seeks to train a smaller network that matches soft-max performance of a larger network. While both regimes have led to impressive performance for their respective goals, neither provide insight into the importance of a given layer in the original model, which is useful if we are to improve our understanding of these highly parameterized models. In this paper, we present the concept of deep net triage, which individually assesses small blocks of convolution layers to understand their collective contribution to the overall performance, which we call emph{criticality}. We call it triage because we assess this criticality by answering the question: what is the impact to the health of the overall network if we compress a block of layers into a single layer. We propose a suite of triage methods and compare them on problem spaces of varying complexity. We ultimately show that, across these problem spaces, deep net triage is able to indicate the of relative importance of different layers. Surprisingly, our local structural compression technique also leads to an improvement in overall accuracy when the final model is fine-tuned globally.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes a method to compress a block of layers in a NN (particularly CNNs) evaluates several different sub-approaches. It is easy to follow the paper. Given experiments are nicely setup with intuitive explanations. My main concern about the paper is novelty/importance and lack of experimental comparison to other techniques attacking similar complexity problems. Authors provide a good intro for the problem and existing common approaches addressing network complexity. They locate themselves well in the literature. They provide a simple compact explanation of what their methods are in section 2.3. I find the coding/notation of the sub-methods a little encrypted though. Simple English or better encoding could be used for better readability. Experimental results are explained pretty well. I think the given results/messages are valuable to the community. However the generalizability to different models/network architectures is in question as the paper is mostly based on VGG16. Also the methods/significance seem to be more like an experimental paper and not significant enough for ICLR. I am borderline about this. Comparison between the proposed methods is nicely done but comparison to other literature pruning/compressing models is not experimentally shown. Ideally we expect a reader designing a new architecture to have an idea for which way to go for model compression after reading this paper—results are good to decide between CLRW –STN, but not informative on how they compare to literature. I think this is a significant missing point.",
"role": "assistant"
}
]
| [
" This paper proposes a method to compress a block of layers in a NN (particularly CNNs) evaluates several different sub-approaches.",
"It is easy to follow the paper.",
"Given experiments are nicely setup with intuitive explanations.",
"My main concern about the paper is novelty/importance and lack of experimental comparison to other techniques attacking similar complexity problems.",
"Authors provide a good intro for the problem and existing common approaches addressing network complexity.",
"They locate themselves well in the literature.",
"They provide a simple compact explanation of what their methods are in section 2.3.",
"I find the coding/notation of the sub-methods a little encrypted though.",
"Simple English or better encoding could be used for better readability.",
"Experimental results are explained pretty well.",
"I think the given results/messages are valuable to the community.",
"However the generalizability to different models/network architectures is in question as the paper is mostly based on VGG16.",
"Also the methods/significance seem to be more like an experimental paper and not significant enough for ICLR.",
"I am borderline about this.",
"Comparison between the proposed methods is nicely done but comparison to other literature pruning/compressing models is not experimentally shown.",
"Ideally we expect a reader designing a new architecture to have an idea for which way to go for model compression after reading this paper—results are good to decide between CLRW –STN, but not informative on how they compare to literature.",
"I think this is a significant missing point."
] | {
"criticism": 8,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 11,
"praise": 7,
"presentation_and_reporting": 7,
"results_and_discussion": 5,
"suggestion_and_solution": 2,
"total": 17
} | 0.470588 | 0 | 0.235294 | 0.647059 | 0.411765 | 0.411765 | 0.294118 | 0.117647 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.588235 | 2.335716 | 0.252519 |
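A minimal Python sketch of how the trailing numeric columns in each record appear to relate to its criteria_count, based only on the records shown here: the per-category ratios equal count/total, each has_* flag marks a non-zero count, reward_value matches the sum of the eight ratios, and reward_value_length_adjusted equals reward_value minus length_penalty. The length_penalty value is taken as given, since its formula is not recoverable from these fields, and the variable names below simply mirror the column names.

# Recompute the derived columns of the record above from its criteria_count.
# Assumption: relationships inferred from the dumped records, not from dataset docs.
criteria_count = {
    "criticism": 8, "example": 0, "importance_and_relevance": 4,
    "materials_and_methods": 11, "praise": 7, "presentation_and_reporting": 7,
    "results_and_discussion": 5, "suggestion_and_solution": 2, "total": 17,
}
length_penalty = 0.252519  # taken directly from the record; origin not shown in this dump

total = criteria_count["total"]
categories = [k for k in criteria_count if k != "total"]

ratios = {k: criteria_count[k] / total for k in categories}        # e.g. 8/17 = 0.470588
has_flags = {f"has_{k}": int(criteria_count[k] > 0) for k in categories}
reward_value = sum(ratios.values())                                 # 2.588235 for this record
reward_value_length_adjusted = reward_value - length_penalty        # 2.335716 for this record

print(round(ratios["criticism"], 6), round(reward_value, 6), round(reward_value_length_adjusted, 6))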
Deep Net Triage: Assessing The Criticality of Network Layers by Structural Compression | Deep network compression seeks to reduce the number of parameters in the network while maintaining a certain level of performance. Deep network distillation seeks to train a smaller network that matches soft-max performance of a larger network. While both regimes have led to impressive performance for their respective goals, neither provide insight into the importance of a given layer in the original model, which is useful if we are to improve our understanding of these highly parameterized models. In this paper, we present the concept of deep net triage, which individually assesses small blocks of convolution layers to understand their collective contribution to the overall performance, which we call emph{criticality}. We call it triage because we assess this criticality by answering the question: what is the impact to the health of the overall network if we compress a block of layers into a single layer. We propose a suite of triage methods and compare them on problem spaces of varying complexity. We ultimately show that, across these problem spaces, deep net triage is able to indicate the of relative importance of different layers. Surprisingly, our local structural compression technique also leads to an improvement in overall accuracy when the final model is fine-tuned globally. | Summary: The paper presents analytical results comparing various initialization and training methods of transferring knowledge from VGG network to a smaller student network by replacing blocks of layers with single layers. Review: The paper is well written and explains the experiments well, however the results are somehow obvious. The method also lacks novelty and looks like a generalized version of fitnets. The insights into initialization schemes are valuable, however I don*t think this is enough for conference track. Comments: Intro mentions that shortcut connections overcome overfitting whereas they are important for preserving signal flow and enabling training of deeper architectures. Page4: fine-tine->fine-tune Details of the experimental setup are missing e.g. optimizer. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Net Triage: Assessing The Criticality of Network Layers by Structural Compression\nDeep network compression seeks to reduce the number of parameters in the network while maintaining a certain level of performance. Deep network distillation seeks to train a smaller network that matches soft-max performance of a larger network. While both regimes have led to impressive performance for their respective goals, neither provide insight into the importance of a given layer in the original model, which is useful if we are to improve our understanding of these highly parameterized models. In this paper, we present the concept of deep net triage, which individually assesses small blocks of convolution layers to understand their collective contribution to the overall performance, which we call emph{criticality}. We call it triage because we assess this criticality by answering the question: what is the impact to the health of the overall network if we compress a block of layers into a single layer. We propose a suite of triage methods and compare them on problem spaces of varying complexity. We ultimately show that, across these problem spaces, deep net triage is able to indicate the of relative importance of different layers. Surprisingly, our local structural compression technique also leads to an improvement in overall accuracy when the final model is fine-tuned globally.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: The paper presents analytical results comparing various initialization and training methods of transferring knowledge from VGG network to a smaller student network by replacing blocks of layers with single layers. Review: The paper is well written and explains the experiments well, however the results are somehow obvious. The method also lacks novelty and looks like a generalized version of fitnets. The insights into initialization schemes are valuable, however I don*t think this is enough for conference track. Comments: Intro mentions that shortcut connections overcome overfitting whereas they are important for preserving signal flow and enabling training of deeper architectures. Page4: fine-tine->fine-tune Details of the experimental setup are missing e.g. optimizer.",
"role": "assistant"
}
]
| [
" Summary: The paper presents analytical results comparing various initialization and training methods of transferring knowledge from VGG network to a smaller student network by replacing blocks of layers with single layers.",
"Review: The paper is well written and explains the experiments well, however the results are somehow obvious.",
"The method also lacks novelty and looks like a generalized version of fitnets.",
"The insights into initialization schemes are valuable, however I don*t think this is enough for conference track.",
"Comments: Intro mentions that shortcut connections overcome overfitting whereas they are important for preserving signal flow and enabling training of deeper architectures.",
"Page4: fine-tine->fine-tune Details of the experimental setup are missing e.g.",
"optimizer."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 0,
"total": 7
} | 0.571429 | 0.142857 | 0.142857 | 0.714286 | 0.285714 | 0.285714 | 0.285714 | 0 | 7 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2.428571 | 1.860404 | 0.568167 |
Attribute-aware Collaborative Filtering: Survey and Classification | Attribute-aware CF models aims at rating prediction given not only the historical rating from users to items, but also the information associated with users (e.g. age), items (e.g. price), or even ratings (e.g. rating time). This paper surveys works in the past decade developing attribute-aware CF systems, and discovered that mathematically they can be classified into four different categories. We provide the readers not only the high level mathematical interpretation of the existing works in this area but also the mathematical insight for each category of models. Finally we provide our preliminary experiment results comparing the effectiveness of the major works in each category. | This paper surveys models for collaborative filtering with user/item covariate. Overall the authors seems to have captured the essence of a large number of popular CF models and I found that the proposed model classification is reasonable. The notation also make it easy to understand the differences between different models. In that sense this paper could be useful to researchers wanting to better understand this field. It may also be useful to develop further insights into current models (although the authors do not go that route). The impact of this paper may be limited in this community since it is a survey about a fairly niche topic (a subset of recommender systems) that may not be of central interest at ICLR. Overall, I think this paper would be a better fit in a recsys, applied ML or information retrieval journal. A few comments: I find that there are several ways the paper could make a stronger contribution: 1) Use the unifying notation to discuss strengths and weaknesses of current approaches (ideally with insights about possible future approaches). 2) Report the results of a large study of many of the surveyed models on a large number of datasets. Ideally further insights could be derived from these results. 3) Provide a common code framework with all methods 4) Add a discussion on more structured sources of covariates (e.g., social networks). This could probably more or less easily be added as a subsection using the current classification. - A similar classification of collaborative filtering models with covariates is proposed in this thesis (p.41): https://tspace.library.utoronto.ca/bitstream/1807/68831/1/Charlin_Laurent_201406_PhD_thesis.pdf - The paper is well written overall but the current version of the paper contains several typos. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAttribute-aware Collaborative Filtering: Survey and Classification\nAttribute-aware CF models aims at rating prediction given not only the historical rating from users to items, but also the information associated with users (e.g. age), items (e.g. price), or even ratings (e.g. rating time). This paper surveys works in the past decade developing attribute-aware CF systems, and discovered that mathematically they can be classified into four different categories. We provide the readers not only the high level mathematical interpretation of the existing works in this area but also the mathematical insight for each category of models. Finally we provide our preliminary experiment results comparing the effectiveness of the major works in each category.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper surveys models for collaborative filtering with user/item covariate. Overall the authors seems to have captured the essence of a large number of popular CF models and I found that the proposed model classification is reasonable. The notation also make it easy to understand the differences between different models. In that sense this paper could be useful to researchers wanting to better understand this field. It may also be useful to develop further insights into current models (although the authors do not go that route). The impact of this paper may be limited in this community since it is a survey about a fairly niche topic (a subset of recommender systems) that may not be of central interest at ICLR. Overall, I think this paper would be a better fit in a recsys, applied ML or information retrieval journal. A few comments: I find that there are several ways the paper could make a stronger contribution: 1) Use the unifying notation to discuss strengths and weaknesses of current approaches (ideally with insights about possible future approaches). 2) Report the results of a large study of many of the surveyed models on a large number of datasets. Ideally further insights could be derived from these results. 3) Provide a common code framework with all methods 4) Add a discussion on more structured sources of covariates (e.g., social networks). This could probably more or less easily be added as a subsection using the current classification. - A similar classification of collaborative filtering models with covariates is proposed in this thesis (p.41): https://tspace.library.utoronto.ca/bitstream/1807/68831/1/Charlin_Laurent_201406_PhD_thesis.pdf - The paper is well written overall but the current version of the paper contains several typos.",
"role": "assistant"
}
]
| [
" This paper surveys models for collaborative filtering with user/item covariate.",
"Overall the authors seems to have captured the essence of a large number of popular CF models and I found that the proposed model classification is reasonable.",
"The notation also make it easy to understand the differences between different models.",
"In that sense this paper could be useful to researchers wanting to better understand this field.",
"It may also be useful to develop further insights into current models (although the authors do not go that route).",
"The impact of this paper may be limited in this community since it is a survey about a fairly niche topic (a subset of recommender systems) that may not be of central interest at ICLR.",
"Overall, I think this paper would be a better fit in a recsys, applied ML or information retrieval journal.",
"A few comments: I find that there are several ways the paper could make a stronger contribution: 1) Use the unifying notation to discuss strengths and weaknesses of current approaches (ideally with insights about possible future approaches).",
"2) Report the results of a large study of many of the surveyed models on a large number of datasets.",
"Ideally further insights could be derived from these results.",
"3) Provide a common code framework with all methods 4) Add a discussion on more structured sources of covariates (e.g., social networks).",
"This could probably more or less easily be added as a subsection using the current classification.",
"- A similar classification of collaborative filtering models with covariates is proposed in this thesis (p.41): https://tspace.library.utoronto.ca/bitstream/1807/68831/1/Charlin_Laurent_201406_PhD_thesis.pdf - The paper is well written overall but the current version of the paper contains several typos."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 10,
"praise": 2,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 9,
"total": 13
} | 0.076923 | 0.076923 | 0.307692 | 0.769231 | 0.153846 | 0.307692 | 0.230769 | 0.692308 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.615385 | 2.615385 | 0 |
Achieving Strong Regularization for Deep Neural Networks | L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks. To understand this phenomenon, we investigate how and why training fails for strong regularization. Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis why the gradients diminish so fast. We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it. We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level. Experiment results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets. Our source code is published. | The work was prompted by an interesting observation: a phase transition can be observed in deep learning with stochastic gradient descent and Tikhonov regularization. When the regularization parameter exceeds a (data-dependent) threshold, the parameters of the model are driven to zero, thereby preventing any learning. The authors then propose to moderate this problem by letting the regularization parameter to be zero for 5 to 10 epochs, and then applying the "strong" penalty parameter. In their experimental results, the phase transition is not observed anymore with their protocol. This leads to better performances, by using penalty parameters that would have prevent learning with the usual protocol. The problem targeted is important, in the sense that it reveals that some of the difficulties related to non-convexity and the use of SGD that are often overlooked. The proposed protocol is reported to work well, but since it is really ad hoc, it fails to convince the reader that it provides the right solution to the problem. I would have found much more satisfactory to either address the initialization issue by a proper warm-start strategy, or to explore standard optimization tools such as constrained optimization (i.e. Ivanov regularization) , that could be for example implemented by stochastic projected gradient or barrier functions. I think that the problem would be better handled that way than with the proposed strategy, which seems to rely only on a rather limited amount of experiments, and which may prove to be inefficient when dealing with big databases. To summarize, I believe that the paper addresses an important point, but that the tools advocated are really rudimentary compared with what has been already proposed elsewhere. Details : - there is a typo in the definition of the proximal operator in Eq. (9) - there are many unsubstantiated speculations in the comments of the experimental section that do not add value to the paper - the figure showing the evolution of the magnitude of parameters arrives too late and could be completed by the evolution of the data-fitting term of the training criterion | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAchieving Strong Regularization for Deep Neural Networks\nL1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks. To understand this phenomenon, we investigate how and why training fails for strong regularization. Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis why the gradients diminish so fast. We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it. We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level. Experiment results show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets. Our source code is published.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The work was prompted by an interesting observation: a phase transition can be observed in deep learning with stochastic gradient descent and Tikhonov regularization. When the regularization parameter exceeds a (data-dependent) threshold, the parameters of the model are driven to zero, thereby preventing any learning. The authors then propose to moderate this problem by letting the regularization parameter to be zero for 5 to 10 epochs, and then applying the \"strong\" penalty parameter. In their experimental results, the phase transition is not observed anymore with their protocol. This leads to better performances, by using penalty parameters that would have prevent learning with the usual protocol. The problem targeted is important, in the sense that it reveals that some of the difficulties related to non-convexity and the use of SGD that are often overlooked. The proposed protocol is reported to work well, but since it is really ad hoc, it fails to convince the reader that it provides the right solution to the problem. I would have found much more satisfactory to either address the initialization issue by a proper warm-start strategy, or to explore standard optimization tools such as constrained optimization (i.e. Ivanov regularization) , that could be for example implemented by stochastic projected gradient or barrier functions. I think that the problem would be better handled that way than with the proposed strategy, which seems to rely only on a rather limited amount of experiments, and which may prove to be inefficient when dealing with big databases. To summarize, I believe that the paper addresses an important point, but that the tools advocated are really rudimentary compared with what has been already proposed elsewhere. Details : - there is a typo in the definition of the proximal operator in Eq. (9) - there are many unsubstantiated speculations in the comments of the experimental section that do not add value to the paper - the figure showing the evolution of the magnitude of parameters arrives too late and could be completed by the evolution of the data-fitting term of the training criterion",
"role": "assistant"
}
]
| [
" The work was prompted by an interesting observation: a phase transition can be observed in deep learning with stochastic gradient descent and Tikhonov regularization.",
"When the regularization parameter exceeds a (data-dependent) threshold, the parameters of the model are driven to zero, thereby preventing any learning.",
"The authors then propose to moderate this problem by letting the regularization parameter to be zero for 5 to 10 epochs, and then applying the \"strong\" penalty parameter.",
"In their experimental results, the phase transition is not observed anymore with their protocol.",
"This leads to better performances, by using penalty parameters that would have prevent learning with the usual protocol.",
"The problem targeted is important, in the sense that it reveals that some of the difficulties related to non-convexity and the use of SGD that are often overlooked.",
"The proposed protocol is reported to work well, but since it is really ad hoc, it fails to convince the reader that it provides the right solution to the problem.",
"I would have found much more satisfactory to either address the initialization issue by a proper warm-start strategy, or to explore standard optimization tools such as constrained optimization (i.e.",
"Ivanov regularization) , that could be for example implemented by stochastic projected gradient or barrier functions.",
"I think that the problem would be better handled that way than with the proposed strategy, which seems to rely only on a rather limited amount of experiments, and which may prove to be inefficient when dealing with big databases.",
"To summarize, I believe that the paper addresses an important point, but that the tools advocated are really rudimentary compared with what has been already proposed elsewhere.",
"Details : - there is a typo in the definition of the proximal operator in Eq.",
"(9) - there are many unsubstantiated speculations in the comments of the experimental section that do not add value to the paper - the figure showing the evolution of the magnitude of parameters arrives too late and could be completed by the evolution of the data-fitting term of the training criterion"
] | {
"criticism": 4,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 8,
"suggestion_and_solution": 5,
"total": 13
} | 0.307692 | 0 | 0.230769 | 0.846154 | 0.153846 | 0.076923 | 0.615385 | 0.384615 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.615385 | 2.615385 | 0 |
COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS | Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data. | The paper proposes a new way of integrating a language model into a seq2seq network: instead of adding the language model only during decoding, the model has access to a pretrained language model during training too. This makes the training and testing conditions more similar. Moreover, only the logits of the pretrained language model are used, making it possible to swap language models post-training. The experiments show that the proposed language model fusion is effective, and works well even when different, domain-dependent language models are used during training and testing. Further experiments indicate that through the integration of a language model at training time the seq2seq*s decoder can be smaller as it is relieved of language modeling. Quality: The paper is well executed, the experiments do basic validation of the model (ablation plus a specially designed task to show model effectiveness) Clarity: Well written, easy to understand. Originality: The main idea is new. Significance: Better language model integration and easier adaptation to new domains of seq2seq models is important. Pros and cons: pros : see above cons: My problem with the paper is lack of experiments on public datasets. The efficacy of the method is shown on only one task on a proprietary corpus engineered for domain mismatch and the method may be not so efficient under other circumstances. Besides presenting results on publicly available data, the paper would also be improved by adding a baseline in which the logits of the language model are added to the logits of the seq2seq decoder at training time. Similarly to cold-fusion, this baseline also allows swapping of language models at test time. In contrast, the baselines presented in the paper are weaker because they don*t use a language model during training time. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nCOLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS\nSequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes a new way of integrating a language model into a seq2seq network: instead of adding the language model only during decoding, the model has access to a pretrained language model during training too. This makes the training and testing conditions more similar. Moreover, only the logits of the pretrained language model are used, making it possible to swap language models post-training. The experiments show that the proposed language model fusion is effective, and works well even when different, domain-dependent language models are used during training and testing. Further experiments indicate that through the integration of a language model at training time the seq2seq*s decoder can be smaller as it is relieved of language modeling. Quality: The paper is well executed, the experiments do basic validation of the model (ablation plus a specially designed task to show model effectiveness) Clarity: Well written, easy to understand. Originality: The main idea is new. Significance: Better language model integration and easier adaptation to new domains of seq2seq models is important. Pros and cons: pros : see above cons: My problem with the paper is lack of experiments on public datasets. The efficacy of the method is shown on only one task on a proprietary corpus engineered for domain mismatch and the method may be not so efficient under other circumstances. Besides presenting results on publicly available data, the paper would also be improved by adding a baseline in which the logits of the language model are added to the logits of the seq2seq decoder at training time. Similarly to cold-fusion, this baseline also allows swapping of language models at test time. In contrast, the baselines presented in the paper are weaker because they don*t use a language model during training time.",
"role": "assistant"
}
]
| [
" The paper proposes a new way of integrating a language model into a seq2seq network: instead of adding the language model only during decoding, the model has access to a pretrained language model during training too.",
"This makes the training and testing conditions more similar.",
"Moreover, only the logits of the pretrained language model are used, making it possible to swap language models post-training.",
"The experiments show that the proposed language model fusion is effective, and works well even when different, domain-dependent language models are used during training and testing.",
"Further experiments indicate that through the integration of a language model at training time the seq2seq*s decoder can be smaller as it is relieved of language modeling.",
"Quality: The paper is well executed, the experiments do basic validation of the model (ablation plus a specially designed task to show model effectiveness) Clarity: Well written, easy to understand.",
"Originality: The main idea is new.",
"Significance: Better language model integration and easier adaptation to new domains of seq2seq models is important.",
"Pros and cons: pros : see above cons: My problem with the paper is lack of experiments on public datasets.",
"The efficacy of the method is shown on only one task on a proprietary corpus engineered for domain mismatch and the method may be not so efficient under other circumstances.",
"Besides presenting results on publicly available data, the paper would also be improved by adding a baseline in which the logits of the language model are added to the logits of the seq2seq decoder at training time.",
"Similarly to cold-fusion, this baseline also allows swapping of language models at test time.",
"In contrast, the baselines presented in the paper are weaker because they don*t use a language model during training time."
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 4,
"presentation_and_reporting": 6,
"results_and_discussion": 1,
"suggestion_and_solution": 2,
"total": 13
} | 0.153846 | 0 | 0.076923 | 0.923077 | 0.307692 | 0.461538 | 0.076923 | 0.153846 | 13 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.153846 | 2.153846 | 0 |
Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network | In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN*s predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution. | Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the "luckiness" of the license plate number. The authors mentions other papers that use NN*s to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings. Then the paper goes on to present the model which is just a vanilla RNN, with standard practices like batch normalization and dropout. The proposed pipeline converts each character to an embedding with the only sentence of description being "Each character is converted by a lookup table to a vector representation, known as character embedding." Specifics of the data, RNN training, and the results as well as the stability of the network to hyperparameters is also examined. Finally they find a "a feature vector for each plate by summing up the output of the last recurrent layer overtime." and the use knn on these features to find other plates that are grouped together to try to explain how the RNN predicts the prices of the plates. In section 7, the RNN is combined with a handcrafted feature model he criticized in a earlier section for being too simple to create an ensemble model that predicts the prices marginally better. Specific Comments on Sections: Comments: Sec 1,2 In these sections the author has somewhat odd references to specific economists that seem a little off topic, and spends a little too much time in my opinion setting up this specific data. Sec 3 The author does not mention the following reference: "Deep learning for stock prediction using numerical and textual information" by Akita et al. that does incorporate non-numerical info to predict stock prices with deep networks. Sec 4 What are the characters embedded with? This is important to specify. Is it Word2vec or something else? What does the lookup table consist of? References should be added to the relevant methods. Sec 5 I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison. LSTMs as well could have been a point of comparison. Sec 6 Nothing too insightful is said about the RNN Model. Sec 7 The ensembling was a strange extension especially with the Woo model given that the other MLP architecture gave way better results in their table. 
Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN. However comparisons are lacking and the paper is not presented very scientifically. The lack of comparisons made it feel like the author cherry picked the RNN to outperform other approaches that obviously would not do well. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPredicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network\nIn Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN*s predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the \"luckiness\" of the license plate number. The authors mentions other papers that use NN*s to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings. Then the paper goes on to present the model which is just a vanilla RNN, with standard practices like batch normalization and dropout. The proposed pipeline converts each character to an embedding with the only sentence of description being \"Each character is converted by a lookup table to a vector representation, known as character embedding.\" Specifics of the data, RNN training, and the results as well as the stability of the network to hyperparameters is also examined. Finally they find a \"a feature vector for each plate by summing up the output of the last recurrent layer overtime.\" and the use knn on these features to find other plates that are grouped together to try to explain how the RNN predicts the prices of the plates. In section 7, the RNN is combined with a handcrafted feature model he criticized in a earlier section for being too simple to create an ensemble model that predicts the prices marginally better. Specific Comments on Sections: Comments: Sec 1,2 In these sections the author has somewhat odd references to specific economists that seem a little off topic, and spends a little too much time in my opinion setting up this specific data. Sec 3 The author does not mention the following reference: \"Deep learning for stock prediction using numerical and textual information\" by Akita et al. that does incorporate non-numerical info to predict stock prices with deep networks. Sec 4 What are the characters embedded with? This is important to specify. Is it Word2vec or something else? What does the lookup table consist of? References should be added to the relevant methods. 
Sec 5 I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison. LSTMs as well could have been a point of comparison. Sec 6 Nothing too insightful is said about the RNN Model. Sec 7 The ensembling was a strange extension especially with the Woo model given that the other MLP architecture gave way better results in their table. Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN. However comparisons are lacking and the paper is not presented very scientifically. The lack of comparisons made it feel like the author cherry picked the RNN to outperform other approaches that obviously would not do well.",
"role": "assistant"
}
]
| [
" Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the \"luckiness\" of the license plate number.",
"The authors mentions other papers that use NN*s to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings.",
"Then the paper goes on to present the model which is just a vanilla RNN, with standard practices like batch normalization and dropout.",
"The proposed pipeline converts each character to an embedding with the only sentence of description being \"Each character is converted by a lookup table to a vector representation, known as character embedding.\"",
"Specifics of the data, RNN training, and the results as well as the stability of the network to hyperparameters is also examined.",
"Finally they find a \"a feature vector for each plate by summing up the output of the last recurrent layer overtime.\"",
"and the use knn on these features to find other plates that are grouped together to try to explain how the RNN predicts the prices of the plates.",
"In section 7, the RNN is combined with a handcrafted feature model he criticized in a earlier section for being too simple to create an ensemble model that predicts the prices marginally better.",
"Specific Comments on Sections: Comments: Sec 1,2 In these sections the author has somewhat odd references to specific economists that seem a little off topic, and spends a little too much time in my opinion setting up this specific data.",
"Sec 3 The author does not mention the following reference: \"Deep learning for stock prediction using numerical and textual information\" by Akita et al.",
"that does incorporate non-numerical info to predict stock prices with deep networks.",
"Sec 4 What are the characters embedded with?",
"This is important to specify.",
"Is it Word2vec or something else?",
"What does the lookup table consist of?",
"References should be added to the relevant methods.",
"Sec 5 I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison.",
"LSTMs as well could have been a point of comparison.",
"Sec 6 Nothing too insightful is said about the RNN Model.",
"Sec 7 The ensembling was a strange extension especially with the Woo model given that the other MLP architecture gave way better results in their table.",
"Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN.",
"However comparisons are lacking and the paper is not presented very scientifically.",
"The lack of comparisons made it feel like the author cherry picked the RNN to outperform other approaches that obviously would not do well."
] | {
"criticism": 7,
"example": 9,
"importance_and_relevance": 0,
"materials_and_methods": 16,
"praise": 1,
"presentation_and_reporting": 12,
"results_and_discussion": 3,
"suggestion_and_solution": 5,
"total": 23
} | 0.304348 | 0.391304 | 0 | 0.695652 | 0.043478 | 0.521739 | 0.130435 | 0.217391 | 23 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.304348 | 0.726105 | 1.578243 |
Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines | Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks. | In this paper, the authors investigate variance reduction techniques for agents with multi-dimensional policy outputs, in particular when they are conditionally independent (*factored*). With the increasing focus on applying RL methods to continuous control problems and RTS type games, this is an important problem and this technique seems like an important addition to the RL toolbox. The paper is well written, the method is easy to implement, and the algorithm seems to have clear positive impact on the presented experiments. - The derivations in pages 4-6 are somewhat disconnected from the rest of the paper: the optimal baseline derivation is very standard (even if adapted to the slightly different situation situated here), and for reasons highlighted by the authors in this paper, they are not often used; the *marginalized* baseline is more common, and indeed, the authors adopt this one as well. In light of this (and of the paper being quite a bit over the page limit)- is this material (4.2->4.4) mostly not better suited for the appendix? Same for section 4.6 (which I believe is not used in the experiments). - The experimental section is very strong; regarding the partial observability experiments, assuming actions are here factored as well, I could see four baselines (two choices for whether the baseline has access to the goal location or not, and two choices for whether the baseline has access to the vector ). It*s not clear which two baselines are depicted in 5b - is it possible to disentangle the effect of providing and the location of the hole to the baseline? (side note: it is an interesting idea to include information not available to the agent as input to the baseline though it does feel a bit *iffy* ; the agent requires information to train, but is not provided the information to act. Out of curiosity, is it intended as an experiment to verify the need for better baselines? Or as a *fair* training procedure?) - Minor: in equation 2- is the correct exponent not t*? Also since is define with a scaling (to make it an actual distribution), I believe the definition of should also be multiplied by (as well as equation 2). 
| iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nVariance Reduction for Policy Gradient with Action-Dependent Factorized Baselines\nPolicy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors investigate variance reduction techniques for agents with multi-dimensional policy outputs, in particular when they are conditionally independent (*factored*). With the increasing focus on applying RL methods to continuous control problems and RTS type games, this is an important problem and this technique seems like an important addition to the RL toolbox. The paper is well written, the method is easy to implement, and the algorithm seems to have clear positive impact on the presented experiments. - The derivations in pages 4-6 are somewhat disconnected from the rest of the paper: the optimal baseline derivation is very standard (even if adapted to the slightly different situation situated here), and for reasons highlighted by the authors in this paper, they are not often used; the *marginalized* baseline is more common, and indeed, the authors adopt this one as well. In light of this (and of the paper being quite a bit over the page limit)- is this material (4.2->4.4) mostly not better suited for the appendix? Same for section 4.6 (which I believe is not used in the experiments). - The experimental section is very strong; regarding the partial observability experiments, assuming actions are here factored as well, I could see four baselines (two choices for whether the baseline has access to the goal location or not, and two choices for whether the baseline has access to the vector ). It*s not clear which two baselines are depicted in 5b - is it possible to disentangle the effect of providing and the location of the hole to the baseline? (side note: it is an interesting idea to include information not available to the agent as input to the baseline though it does feel a bit *iffy* ; the agent requires information to train, but is not provided the information to act. Out of curiosity, is it intended as an experiment to verify the need for better baselines? Or as a *fair* training procedure?) - Minor: in equation 2- is the correct exponent not t*? Also since is define with a scaling (to make it an actual distribution), I believe the definition of should also be multiplied by (as well as equation 2).",
"role": "assistant"
}
]
| [
" In this paper, the authors investigate variance reduction techniques for agents with multi-dimensional policy outputs, in particular when they are conditionally independent (*factored*).",
"With the increasing focus on applying RL methods to continuous control problems and RTS type games, this is an important problem and this technique seems like an important addition to the RL toolbox.",
"The paper is well written, the method is easy to implement, and the algorithm seems to have clear positive impact on the presented experiments.",
"- The derivations in pages 4-6 are somewhat disconnected from the rest of the paper: the optimal baseline derivation is very standard (even if adapted to the slightly different situation situated here), and for reasons highlighted by the authors in this paper, they are not often used; the *marginalized* baseline is more common, and indeed, the authors adopt this one as well.",
"In light of this (and of the paper being quite a bit over the page limit)- is this material (4.2->4.4) mostly not better suited for the appendix?",
"Same for section 4.6 (which I believe is not used in the experiments).",
"- The experimental section is very strong; regarding the partial observability experiments, assuming actions are here factored as well, I could see four baselines (two choices for whether the baseline has access to the goal location or not, and two choices for whether the baseline has access to the vector ).",
"It*s not clear which two baselines are depicted in 5b - is it possible to disentangle the effect of providing and the location of the hole to the baseline?",
"(side note: it is an interesting idea to include information not available to the agent as input to the baseline though it does feel a bit *iffy* ; the agent requires information to train, but is not provided the information to act.",
"Out of curiosity, is it intended as an experiment to verify the need for better baselines?",
"Or as a *fair* training procedure?)",
"- Minor: in equation 2- is the correct exponent not t*?",
"Also since is define with a scaling (to make it an actual distribution), I believe the definition of should also be multiplied by (as well as equation 2)."
] | {
"criticism": 4,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 11,
"praise": 3,
"presentation_and_reporting": 5,
"results_and_discussion": 3,
"suggestion_and_solution": 2,
"total": 13
} | 0.307692 | 0.153846 | 0.230769 | 0.846154 | 0.230769 | 0.384615 | 0.230769 | 0.153846 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.538462 | 2.538462 | 0 |
Convolutional Sequence Modeling Revisited | This paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current "default" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as a dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches. We further show that the potential "infinite memory" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to (re)consider ConvNets as the default "go to" architecture for sequence modeling. | In this paper, the authors argue for the use of convolutional architectures as a general purpose tool for sequence modeling. They start by proposing a generic temporal convolution sequence model which leverages recent advances in the field, discuss the respective advantages of convolutional and recurrent networks, and benchmark their architecture on a number of different tasks. The paper is clearly written and easy to follow, does a good job of presenting both the advantages and disadvantages of the proposed method, and convincingly makes the point that convolutional architectures should at least be considered for any sequence modeling task; they are indeed still often overlooked, in spite of some strong performances in language modeling and translation in recent works. The only part which is slightly less convincing is the section about effective memory size. While it is true that learning longer term dependencies can be difficult in standard RNN architectures, it is interesting to notice that the SoTA results presented in appendix B.3 for language modeling on larger data sets are architectures which focus on remedying this difficulty (cache model and hierarchical LSTM). It would also be interesting to see how TCN works on word prediction tasks which are devised explicitly to test for longer memory, such as Lambada (1) or Children Books Test (2). As a minor point, adding a measure of complexity in terms of number of operations could be a useful hardware-independent indication of the computational cost of the architecture. Pros: - Clearly written, well executed paper - Makes a strong point for the use of convolutional architecture for sequences - Provides useful benchmarks for the community Cons: - The claims on effective memory size need more context and justification 1: The LAMBADA dataset: Word prediction requiring a broad discourse context, Paperno et al. 2016 2: The Goldilocks principle: reading children*s books with explicit memory representation, Hill et al. 2016 | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. 
The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nConvolutional Sequence Modeling Revisited\nThis paper revisits the problem of sequence modeling using convolutional architectures. Although both convolutional and recurrent architectures have a long history in sequence prediction, the current \"default\" mindset in much of the deep learning community is that generic sequence modeling is best handled using recurrent networks. The goal of this paper is to question this assumption. Specifically, we consider a simple generic temporal convolution network (TCN), which adopts features from modern ConvNet architectures such as a dilations and residual connections. We show that on a variety of sequence modeling tasks, including many frequently used as benchmarks for evaluating recurrent networks, the TCN outperforms baseline RNN methods (LSTMs, GRUs, and vanilla RNNs) and sometimes even highly specialized approaches. We further show that the potential \"infinite memory\" advantage that RNNs have over TCNs is largely absent in practice: TCNs indeed exhibit longer effective history sizes than their recurrent counterparts. As a whole, we argue that it may be time to (re)consider ConvNets as the default \"go to\" architecture for sequence modeling.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors argue for the use of convolutional architectures as a general purpose tool for sequence modeling. They start by proposing a generic temporal convolution sequence model which leverages recent advances in the field, discuss the respective advantages of convolutional and recurrent networks, and benchmark their architecture on a number of different tasks. The paper is clearly written and easy to follow, does a good job of presenting both the advantages and disadvantages of the proposed method, and convincingly makes the point that convolutional architectures should at least be considered for any sequence modeling task; they are indeed still often overlooked, in spite of some strong performances in language modeling and translation in recent works. The only part which is slightly less convincing is the section about effective memory size. While it is true that learning longer term dependencies can be difficult in standard RNN architectures, it is interesting to notice that the SoTA results presented in appendix B.3 for language modeling on larger data sets are architectures which focus on remedying this difficulty (cache model and hierarchical LSTM). It would also be interesting to see how TCN works on word prediction tasks which are devised explicitly to test for longer memory, such as Lambada (1) or Children Books Test (2). As a minor point, adding a measure of complexity in terms of number of operations could be a useful hardware-independent indication of the computational cost of the architecture. Pros: - Clearly written, well executed paper - Makes a strong point for the use of convolutional architecture for sequences - Provides useful benchmarks for the community Cons: - The claims on effective memory size need more context and justification 1: The LAMBADA dataset: Word prediction requiring a broad discourse context, Paperno et al. 2016 2: The Goldilocks principle: reading children*s books with explicit memory representation, Hill et al. 2016",
"role": "assistant"
}
]
| [
" In this paper, the authors argue for the use of convolutional architectures as a general purpose tool for sequence modeling.",
"They start by proposing a generic temporal convolution sequence model which leverages recent advances in the field, discuss the respective advantages of convolutional and recurrent networks, and benchmark their architecture on a number of different tasks.",
"The paper is clearly written and easy to follow, does a good job of presenting both the advantages and disadvantages of the proposed method, and convincingly makes the point that convolutional architectures should at least be considered for any sequence modeling task; they are indeed still often overlooked, in spite of some strong performances in language modeling and translation in recent works.",
"The only part which is slightly less convincing is the section about effective memory size.",
"While it is true that learning longer term dependencies can be difficult in standard RNN architectures, it is interesting to notice that the SoTA results presented in appendix B.3 for language modeling on larger data sets are architectures which focus on remedying this difficulty (cache model and hierarchical LSTM).",
"It would also be interesting to see how TCN works on word prediction tasks which are devised explicitly to test for longer memory, such as Lambada (1) or Children Books Test (2).",
"As a minor point, adding a measure of complexity in terms of number of operations could be a useful hardware-independent indication of the computational cost of the architecture.",
"Pros: - Clearly written, well executed paper - Makes a strong point for the use of convolutional architecture for sequences - Provides useful benchmarks for the community Cons: - The claims on effective memory size need more context and justification 1: The LAMBADA dataset: Word prediction requiring a broad discourse context, Paperno et al.",
"2016 2: The Goldilocks principle: reading children*s books with explicit memory representation, Hill et al.",
"2016"
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 5,
"materials_and_methods": 7,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 1,
"suggestion_and_solution": 4,
"total": 10
} | 0.2 | 0 | 0.5 | 0.7 | 0.3 | 0.3 | 0.1 | 0.4 | 10 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.5 | 2.357958 | 0.142042 |
Generating Natural Adversarial Examples | Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers. | Summary: A method for creation of semantical adversary examples in suggested. The ‘semantic’ property is measured by building a latent space with mapping from this space to the observable (generator) and back (inverter). The generator is trained with a WGAN optimization. Semantic adversarials examples are them searched for by inverting an example to its sematic encoding and running local search around it in that space. The method is tested for generation of images on MNist and part of LSUM data and for creation of text examples which are adversarial in some sense to inference and translation sentences. It is shown that the distance between adversarial example and the original example in the latent space is proportional to the accuracy of the classifier inspected. Page 3: It seems that the search algorithm has a additional parameter: r_0, the size of the area in which search is initiated. This should be explicitly said and the parameter value should be stated. Page 4: - the implementation details of the generator, critic and invertor networks are not given in enough details, and instead the reader is referred to other papers. This makes this paper non-clear as a stand alone document, and is a problem for a paper which is mostly based on experiments and their results: the main networks used are not described. - the visual examples are interesting, but it seems that they are able to find good natural adversary examples only for a weak classifier. In the MNist case, the examples for thr random forest are nautral and surprising, but those for the LE-Net are often not: they often look as if they indeed belong to the other class (the one pointed by the classifier). In the churce-vs. tower case, a relatively weak MLP classifier was used. It would be more instructive to see the results for a better, convolutional classifier. Page 5: - the description of the various networks used for text generation is insufficient for understanding: o The AREA is described in two sentences. It is not clear how this module is built, was loss was it used to optimize in the first place, and what elements of it are re0used for the current task o ‘inverter’ here is used in a sense which is different than in previous sections of the paper: earlier it denoted the mapping from output (images) to the underlying latent space. 
Here it denote a mapping between two latent spaces. o It is not clear what the ‘four-layers strided CNN’ is: its structure, its role in the system. How is it optimized? o In general: a block diagram showing the relation between all the system’s components may be useful, plus the details about the structure and optimization of the various modules. It seems that the system here contains 5 modules instead of the three used before (critic, generator and inverter), but this is not clear enough. Also which modules are pre-trained, which are optimized together,a nd which are optimized separately is not clear. o SNLI data should be described: content, size, the task it is used for Pro: - A novel idea of producing natural adversary examples with a GAN - The generated examples are in some cases useful for interpretation and network understanding - The method enables creation of adversarial examples for block box classifiers Cons - The idea implementation is basic. Specifically search algorithm presented is quite simplistic, and no variations other than plain local search were developed and tested - The generated adversarial examples created for successful complex classifiers are often not impressive and useful (they are either not semantical, or semantical but correctly classified by the classifier). Hence It is not clear if the latent space used by the method enables finding of interesting adversarial examples for accurate classifiers. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGenerating Natural Adversarial Examples\nDue to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Summary: A method for creation of semantical adversary examples in suggested. The ‘semantic’ property is measured by building a latent space with mapping from this space to the observable (generator) and back (inverter). The generator is trained with a WGAN optimization. Semantic adversarials examples are them searched for by inverting an example to its sematic encoding and running local search around it in that space. The method is tested for generation of images on MNist and part of LSUM data and for creation of text examples which are adversarial in some sense to inference and translation sentences. It is shown that the distance between adversarial example and the original example in the latent space is proportional to the accuracy of the classifier inspected. Page 3: It seems that the search algorithm has a additional parameter: r_0, the size of the area in which search is initiated. This should be explicitly said and the parameter value should be stated. Page 4: - the implementation details of the generator, critic and invertor networks are not given in enough details, and instead the reader is referred to other papers. This makes this paper non-clear as a stand alone document, and is a problem for a paper which is mostly based on experiments and their results: the main networks used are not described. - the visual examples are interesting, but it seems that they are able to find good natural adversary examples only for a weak classifier. In the MNist case, the examples for thr random forest are nautral and surprising, but those for the LE-Net are often not: they often look as if they indeed belong to the other class (the one pointed by the classifier). In the churce-vs. tower case, a relatively weak MLP classifier was used. It would be more instructive to see the results for a better, convolutional classifier. Page 5: - the description of the various networks used for text generation is insufficient for understanding: o The AREA is described in two sentences. 
It is not clear how this module is built, was loss was it used to optimize in the first place, and what elements of it are re0used for the current task o ‘inverter’ here is used in a sense which is different than in previous sections of the paper: earlier it denoted the mapping from output (images) to the underlying latent space. Here it denote a mapping between two latent spaces. o It is not clear what the ‘four-layers strided CNN’ is: its structure, its role in the system. How is it optimized? o In general: a block diagram showing the relation between all the system’s components may be useful, plus the details about the structure and optimization of the various modules. It seems that the system here contains 5 modules instead of the three used before (critic, generator and inverter), but this is not clear enough. Also which modules are pre-trained, which are optimized together,a nd which are optimized separately is not clear. o SNLI data should be described: content, size, the task it is used for Pro: - A novel idea of producing natural adversary examples with a GAN - The generated examples are in some cases useful for interpretation and network understanding - The method enables creation of adversarial examples for block box classifiers Cons - The idea implementation is basic. Specifically search algorithm presented is quite simplistic, and no variations other than plain local search were developed and tested - The generated adversarial examples created for successful complex classifiers are often not impressive and useful (they are either not semantical, or semantical but correctly classified by the classifier). Hence It is not clear if the latent space used by the method enables finding of interesting adversarial examples for accurate classifiers.",
"role": "assistant"
}
]
| [
" Summary: A method for creation of semantical adversary examples in suggested.",
"The ‘semantic’ property is measured by building a latent space with mapping from this space to the observable (generator) and back (inverter).",
"The generator is trained with a WGAN optimization.",
"Semantic adversarials examples are them searched for by inverting an example to its sematic encoding and running local search around it in that space.",
"The method is tested for generation of images on MNist and part of LSUM data and for creation of text examples which are adversarial in some sense to inference and translation sentences.",
"It is shown that the distance between adversarial example and the original example in the latent space is proportional to the accuracy of the classifier inspected.",
"Page 3: It seems that the search algorithm has a additional parameter: r_0, the size of the area in which search is initiated.",
"This should be explicitly said and the parameter value should be stated.",
"Page 4: - the implementation details of the generator, critic and invertor networks are not given in enough details, and instead the reader is referred to other papers.",
"This makes this paper non-clear as a stand alone document, and is a problem for a paper which is mostly based on experiments and their results: the main networks used are not described.",
"- the visual examples are interesting, but it seems that they are able to find good natural adversary examples only for a weak classifier.",
"In the MNist case, the examples for thr random forest are nautral and surprising, but those for the LE-Net are often not: they often look as if they indeed belong to the other class (the one pointed by the classifier).",
"In the churce-vs. tower case, a relatively weak MLP classifier was used.",
"It would be more instructive to see the results for a better, convolutional classifier.",
"Page 5: - the description of the various networks used for text generation is insufficient for understanding: o The AREA is described in two sentences.",
"It is not clear how this module is built, was loss was it used to optimize in the first place, and what elements of it are re0used for the current task o ‘inverter’ here is used in a sense which is different than in previous sections of the paper: earlier it denoted the mapping from output (images) to the underlying latent space.",
"Here it denote a mapping between two latent spaces.",
"o It is not clear what the ‘four-layers strided CNN’ is: its structure, its role in the system.",
"How is it optimized?",
"o In general: a block diagram showing the relation between all the system’s components may be useful, plus the details about the structure and optimization of the various modules.",
"It seems that the system here contains 5 modules instead of the three used before (critic, generator and inverter), but this is not clear enough.",
"Also which modules are pre-trained, which are optimized together,a nd which are optimized separately is not clear.",
"o SNLI data should be described: content, size, the task it is used for Pro: - A novel idea of producing natural adversary examples with a GAN - The generated examples are in some cases useful for interpretation and network understanding - The method enables creation of adversarial examples for block box classifiers Cons - The idea implementation is basic.",
"Specifically search algorithm presented is quite simplistic, and no variations other than plain local search were developed and tested - The generated adversarial examples created for successful complex classifiers are often not impressive and useful (they are either not semantical, or semantical but correctly classified by the classifier).",
"Hence It is not clear if the latent space used by the method enables finding of interesting adversarial examples for accurate classifiers."
] | {
"criticism": 9,
"example": 3,
"importance_and_relevance": 3,
"materials_and_methods": 19,
"praise": 1,
"presentation_and_reporting": 13,
"results_and_discussion": 3,
"suggestion_and_solution": 4,
"total": 25
} | 0.36 | 0.12 | 0.12 | 0.76 | 0.04 | 0.52 | 0.12 | 0.16 | 25 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.2 | -0.07267 | 2.27267 |
DeepArchitect: Automatically Designing and Training Deep Architectures | In deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperpa- rameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular lan- guage that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree- structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequen- tial model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discov- ery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available | This paper introduces a DeepArchitect framework to build and train deep models automatically. Specifically, the authors proposes three components, i.e., model search specification language, model search algorithm, model evaluation algorithm. The paper is written well, and the proposed framework provides us with a systematical way to design deep models. However, my concern is mainly about its importance in practice. The experiments and computational modules are basic and small-scale, i.e., it may be restricted for large-scale computer vision problems. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeepArchitect: Automatically Designing and Training Deep Architectures\nIn deep learning, performance is strongly affected by the choice of architecture and hyperparameters. While there has been extensive work on automatic hyperpa- rameter optimization for simple spaces, complex spaces such as the space of deep architectures remain largely unexplored. As a result, the choice of architecture is done manually by the human expert through a slow trial and error process guided mainly by intuition. In this paper we describe a framework for automatically designing and training deep models. We propose an extensible and modular lan- guage that allows the human expert to compactly represent complex search spaces over architectures and their hyperparameters. The resulting search spaces are tree- structured and therefore easy to traverse. Models can be automatically compiled to computational graphs once values for all hyperparameters have been chosen. We can leverage the structure of the search space to introduce different model search algorithms, such as random search, Monte Carlo tree search (MCTS), and sequen- tial model-based optimization (SMBO). We present experiments comparing the different algorithms on CIFAR-10 and show that MCTS and SMBO outperform random search. We also present experiments on MNIST, showing that the same search space achieves near state-of-the-art performance with a few samples. These experiments show that our framework can be used effectively for model discov- ery, as it is possible to describe expressive search spaces and discover competitive models without much effort from the human expert. Code for our framework and experiments has been made publicly available",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper introduces a DeepArchitect framework to build and train deep models automatically. Specifically, the authors proposes three components, i.e., model search specification language, model search algorithm, model evaluation algorithm. The paper is written well, and the proposed framework provides us with a systematical way to design deep models. However, my concern is mainly about its importance in practice. The experiments and computational modules are basic and small-scale, i.e., it may be restricted for large-scale computer vision problems.",
"role": "assistant"
}
]
| [
" This paper introduces a DeepArchitect framework to build and train deep models automatically.",
"Specifically, the authors proposes three components, i.e., model search specification language, model search algorithm, model evaluation algorithm.",
"The paper is written well, and the proposed framework provides us with a systematical way to design deep models.",
"However, my concern is mainly about its importance in practice.",
"The experiments and computational modules are basic and small-scale, i.e., it may be restricted for large-scale computer vision problems."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 5
} | 0 | 0 | 0.4 | 0.8 | 0.6 | 0.4 | 0 | 0 | 5 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 2.2 | 1.189924 | 1.010076 |
Skip Connections Eliminate Singularities | Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets. | This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function. The discussion is largely based on a sequence of experiments, some of which are interesting and insightful. The discussion here can be useful for other researchers. My main concern is that the result here is purely empirical, with no concrete theoretical justification. What the experiments reveal is an empirical correlation between the Eigval index and training accuracy, which can be caused by lots of reasons (and cofounders), and does not necessarily establish a causal relation. Therefore, i found many of the discussion to be questionable. I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper. Do you have a sense how accurate is the estimation of the tail probabilities of the eigenvalues? Because the whole paper is based on the approximation of the eigval indexes, it is critical to exam the estimation is accurate enough to draw the conclusions in the paper. All the conclusions are based on one or two datasets. Could you consider testing the result on more different datasets to verify if the results are generalizable? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSkip Connections Eliminate Singularities\nSkip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the \"ghosts\" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function. The discussion is largely based on a sequence of experiments, some of which are interesting and insightful. The discussion here can be useful for other researchers. My main concern is that the result here is purely empirical, with no concrete theoretical justification. What the experiments reveal is an empirical correlation between the Eigval index and training accuracy, which can be caused by lots of reasons (and cofounders), and does not necessarily establish a causal relation. Therefore, i found many of the discussion to be questionable. I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper. Do you have a sense how accurate is the estimation of the tail probabilities of the eigenvalues? Because the whole paper is based on the approximation of the eigval indexes, it is critical to exam the estimation is accurate enough to draw the conclusions in the paper. All the conclusions are based on one or two datasets. Could you consider testing the result on more different datasets to verify if the results are generalizable?",
"role": "assistant"
}
]
| [
" This paper proposes to explain the benefits of skip connections in terms of eliminating the singularities of the loss function.",
"The discussion is largely based on a sequence of experiments, some of which are interesting and insightful.",
"The discussion here can be useful for other researchers.",
"My main concern is that the result here is purely empirical, with no concrete theoretical justification.",
"What the experiments reveal is an empirical correlation between the Eigval index and training accuracy, which can be caused by lots of reasons (and cofounders), and does not necessarily establish a causal relation.",
"Therefore, i found many of the discussion to be questionable.",
"I would love to see more solid theoretical discussion to justify the hypothesis proposed in this paper.",
"Do you have a sense how accurate is the estimation of the tail probabilities of the eigenvalues?",
"Because the whole paper is based on the approximation of the eigval indexes, it is critical to exam the estimation is accurate enough to draw the conclusions in the paper.",
"All the conclusions are based on one or two datasets.",
"Could you consider testing the result on more different datasets to verify if the results are generalizable?"
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 11,
"suggestion_and_solution": 4,
"total": 11
} | 0.272727 | 0 | 0.181818 | 0.545455 | 0.181818 | 0 | 1 | 0.363636 | 11 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.545455 | 2.482325 | 0.06313 |
Skip Connections Eliminate Singularities | Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets. | Paper examines the use of skip connections (including residual layers) in deep networks as a way of alleviating two perceived difficulties in training: 1) when a neuron does not contain any information, and 2) when two neurons in a layer compute the same function. Both of these cases lead to singularities in the Hessian matrix, and this work includes a number of experiments showing the effect of skip connections on the Hessian during training. This is a significant and timely topic. While I may not be the best one to judge the originality of this work, I appreciated how the authors presented clear and concise arguments with experiments to back up their claims. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nSkip Connections Eliminate Singularities\nSkip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the \"ghosts\" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Paper examines the use of skip connections (including residual layers) in deep networks as a way of alleviating two perceived difficulties in training: 1) when a neuron does not contain any information, and 2) when two neurons in a layer compute the same function. Both of these cases lead to singularities in the Hessian matrix, and this work includes a number of experiments showing the effect of skip connections on the Hessian during training. This is a significant and timely topic. While I may not be the best one to judge the originality of this work, I appreciated how the authors presented clear and concise arguments with experiments to back up their claims.",
"role": "assistant"
}
]
| [
" Paper examines the use of skip connections (including residual layers) in deep networks as a way of alleviating two perceived difficulties in training: 1) when a neuron does not contain any information, and 2) when two neurons in a layer compute the same function.",
"Both of these cases lead to singularities in the Hessian matrix, and this work includes a number of experiments showing the effect of skip connections on the Hessian during training.",
"This is a significant and timely topic.",
"While I may not be the best one to judge the originality of this work, I appreciated how the authors presented clear and concise arguments with experiments to back up their claims."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 3,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 0,
"total": 4
} | 0 | 0 | 0.75 | 0.75 | 0.5 | 0 | 0.25 | 0 | 4 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 2.25 | 0.971623 | 1.278377 |
Interpreting Deep Classification Models With Bayesian Inference | In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CI- FAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer. | The paper develops a technique to understand what nodes in a neural network are important for prediction. The approach they develop consists of using an Indian Buffet Process to model a binary activation matrix with number of rows equal to the number of examples. The binary variables are estimated by taking a relaxed version of the asymptotic MAP objective for this problem. One question from the use of the Indian Buffet Process: how do the asymptotics of the feature allocation determine the number of hidden units selected? Overall, the results didn*t warrant the complexity of the method. The results are neat, but I couldn*t tell why this approach was better than others. Lastly, can you intuitively explain the additivity assumption in the distribution for p(y*) | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nInterpreting Deep Classification Models With Bayesian Inference\nIn this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CI- FAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper develops a technique to understand what nodes in a neural network are important for prediction. The approach they develop consists of using an Indian Buffet Process to model a binary activation matrix with number of rows equal to the number of examples. The binary variables are estimated by taking a relaxed version of the asymptotic MAP objective for this problem. One question from the use of the Indian Buffet Process: how do the asymptotics of the feature allocation determine the number of hidden units selected? Overall, the results didn*t warrant the complexity of the method. The results are neat, but I couldn*t tell why this approach was better than others. Lastly, can you intuitively explain the additivity assumption in the distribution for p(y*)",
"role": "assistant"
}
]
| [
" The paper develops a technique to understand what nodes in a neural network are important for prediction.",
"The approach they develop consists of using an Indian Buffet Process to model a binary activation matrix with number of rows equal to the number of examples.",
"The binary variables are estimated by taking a relaxed version of the asymptotic MAP objective for this problem.",
"One question from the use of the Indian Buffet Process: how do the asymptotics of the feature allocation determine the number of hidden units selected?",
"Overall, the results didn*t warrant the complexity of the method.",
"The results are neat, but I couldn*t tell why this approach was better than others.",
"Lastly, can you intuitively explain the additivity assumption in the distribution for p(y*)"
] | {
"criticism": 2,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 3,
"suggestion_and_solution": 0,
"total": 7
} | 0.285714 | 0 | 0.142857 | 1 | 0.142857 | 0.142857 | 0.428571 | 0 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 2.142857 | 1.57469 | 0.568167 |
Regularizing and Optimizing LSTM Language Models | In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https://github.com/salesforce/awd-lstm-lm. | The paper sets a new state of the art on word level language modelling on the Penn Treebank and Wikitext-2 datasets using various optimization and regularization techniques. These already very good results are further improved, by a large margin, using a Neural Cache. The paper is well written, easy to follow and the results speak for themselves. One possible criticism is that the experimental methodology does not allow for reliable conclusions to be drawn about contributions of all different techniques, because they seem to have been evaluated at a single hyperparameter setting (that was hand tuned for the full model?). A variant on the Averaged SGD method is proposed. This so called NT-ASGD optimizer switches to averaging mode based on recent validation losses. I would have liked to see a more thorough assessment of NT-ASGD, especially against well tuned SGD. I particularly liked Figure 3 which shows how the Neural Cache makes the model much better at handling rare words and UNK (!) at the expense of very common words. Speaking of the Neural Cache, a natural baseline would have been dynamic evaluation. All in all, the paper is a solid contribution which deserves to be accepted. It could become even better, were the experiments to tease the various factors apart. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. 
If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRegularizing and Optimizing LSTM Language Models\nIn this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization. Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user. Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2. We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart. The code for reproducing the results is open sourced and is available at https://github.com/salesforce/awd-lstm-lm.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper sets a new state of the art on word level language modelling on the Penn Treebank and Wikitext-2 datasets using various optimization and regularization techniques. These already very good results are further improved, by a large margin, using a Neural Cache. The paper is well written, easy to follow and the results speak for themselves. One possible criticism is that the experimental methodology does not allow for reliable conclusions to be drawn about contributions of all different techniques, because they seem to have been evaluated at a single hyperparameter setting (that was hand tuned for the full model?). A variant on the Averaged SGD method is proposed. This so called NT-ASGD optimizer switches to averaging mode based on recent validation losses. I would have liked to see a more thorough assessment of NT-ASGD, especially against well tuned SGD. I particularly liked Figure 3 which shows how the Neural Cache makes the model much better at handling rare words and UNK (!) at the expense of very common words. Speaking of the Neural Cache, a natural baseline would have been dynamic evaluation. All in all, the paper is a solid contribution which deserves to be accepted. It could become even better, were the experiments to tease the various factors apart.",
"role": "assistant"
}
]
| [
" The paper sets a new state of the art on word level language modelling on the Penn Treebank and Wikitext-2 datasets using various optimization and regularization techniques.",
"These already very good results are further improved, by a large margin, using a Neural Cache.",
"The paper is well written, easy to follow and the results speak for themselves.",
"One possible criticism is that the experimental methodology does not allow for reliable conclusions to be drawn about contributions of all different techniques, because they seem to have been evaluated at a single hyperparameter setting (that was hand tuned for the full model?).",
"A variant on the Averaged SGD method is proposed.",
"This so called NT-ASGD optimizer switches to averaging mode based on recent validation losses.",
"I would have liked to see a more thorough assessment of NT-ASGD, especially against well tuned SGD.",
"I particularly liked Figure 3 which shows how the Neural Cache makes the model much better at handling rare words and UNK (!)",
"at the expense of very common words.",
"Speaking of the Neural Cache, a natural baseline would have been dynamic evaluation.",
"All in all, the paper is a solid contribution which deserves to be accepted.",
"It could become even better, were the experiments to tease the various factors apart."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 2,
"materials_and_methods": 8,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 12
} | 0.083333 | 0.083333 | 0.166667 | 0.666667 | 0.333333 | 0.333333 | 0.25 | 0.25 | 12 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 2.150884 | 0.015782 |
An information-theoretic analysis of deep latent-variable models | We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family. | - I think that VAEs are rather forced to be interpreted from an information theoretic point of view for the sake of it, rather than for the sake of a clear and unequivocal contribution from the perspective of VAEs and latent-variable models themselves. How is that useful for a VAE? - "The left vertical line corresponds to the zero rate setting. ...": All these limits are again from an information theoretic point of view and no formulation nor demonstration is provided on how this can actually be as useful. As mentioned earlier in the paper, there are well-known problems with taking this information theory perspective, e.g. difficulties in estimating MI values, etc. - Breaking (some of) the long sentences and paragraphs in page 3 with an unequivocal mathematical formulation would smooth the flow a bit. - "(2) an upper bound that measures the rate, or how costly it is to transmit information about the latent variable.": I am not entirely sure about this one and why it is massively important to be compromised against the obviously big first term. - Toy Model experiment: I do not see any indication of how this is not just a lucky catch and that VAEs consistently suffer from a problem leading to such effect. - Section 5: "can shed light on many different models and objectives that have been proposed in the literature... ": Again the contribution aspect is not so clear through the word "shed light". Minor - Although apparently VAEs represent the main and most influential latent-variable model example, I think switching too much between citing them as VAEs and then as latent-variable models in general was a bit confusing. I propose mentioning in the beginning (as happened) that VAEs are the seminal example of latent-variable models and then going on from this point onwards with VAEs without too much alternation between latent-variable models and VAEs. - page 8: "as show" | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nAn information-theoretic analysis of deep latent-variable models\nWe present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communication cost (rate). We derive the optimal frontier of generative models in the two-dimensional rate-distortion plane, and show how the standard evidence lower bound objective is insufficient to select between points along this frontier. However, by performing targeted optimization to learn generative models with different rates, we are able to learn many models that can achieve similar generative performance but make vastly different trade-offs in terms of the usage of the latent variable. Through experiments on MNIST and Omniglot with a variety of architectures, we show how our framework sheds light on many recent proposed extensions to the variational autoencoder family.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n - I think that VAEs are rather forced to be interpreted from an information theoretic point of view for the sake of it, rather than for the sake of a clear and unequivocal contribution from the perspective of VAEs and latent-variable models themselves. How is that useful for a VAE? - \"The left vertical line corresponds to the zero rate setting. ...\": All these limits are again from an information theoretic point of view and no formulation nor demonstration is provided on how this can actually be as useful. As mentioned earlier in the paper, there are well-known problems with taking this information theory perspective, e.g. difficulties in estimating MI values, etc. - Breaking (some of) the long sentences and paragraphs in page 3 with an unequivocal mathematical formulation would smooth the flow a bit. - \"(2) an upper bound that measures the rate, or how costly it is to transmit information about the latent variable.\": I am not entirely sure about this one and why it is massively important to be compromised against the obviously big first term. - Toy Model experiment: I do not see any indication of how this is not just a lucky catch and that VAEs consistently suffer from a problem leading to such effect. - Section 5: \"can shed light on many different models and objectives that have been proposed in the literature... \": Again the contribution aspect is not so clear through the word \"shed light\". Minor - Although apparently VAEs represent the main and most influential latent-variable model example, I think switching too much between citing them as VAEs and then as latent-variable models in general was a bit confusing. I propose mentioning in the beginning (as happened) that VAEs are the seminal example of latent-variable models and then going on from this point onwards with VAEs without too much alternation between latent-variable models and VAEs. - page 8: \"as show\"",
"role": "assistant"
}
]
| [
" - I think that VAEs are rather forced to be interpreted from an information theoretic point of view for the sake of it, rather than for the sake of a clear and unequivocal contribution from the perspective of VAEs and latent-variable models themselves.",
"How is that useful for a VAE?",
"- \"The left vertical line corresponds to the zero rate setting.",
"...\": All these limits are again from an information theoretic point of view and no formulation nor demonstration is provided on how this can actually be as useful.",
"As mentioned earlier in the paper, there are well-known problems with taking this information theory perspective, e.g.",
"difficulties in estimating MI values, etc.",
"- Breaking (some of) the long sentences and paragraphs in page 3 with an unequivocal mathematical formulation would smooth the flow a bit.",
"- \"(2) an upper bound that measures the rate, or how costly it is to transmit information about the latent variable.",
"\": I am not entirely sure about this one and why it is massively important to be compromised against the obviously big first term.",
"- Toy Model experiment: I do not see any indication of how this is not just a lucky catch and that VAEs consistently suffer from a problem leading to such effect.",
"- Section 5: \"can shed light on many different models and objectives that have been proposed in the literature... \": Again the contribution aspect is not so clear through the word \"shed light\".",
"Minor - Although apparently VAEs represent the main and most influential latent-variable model example, I think switching too much between citing them as VAEs and then as latent-variable models in general was a bit confusing.",
"I propose mentioning in the beginning (as happened) that VAEs are the seminal example of latent-variable models and then going on from this point onwards with VAEs without too much alternation between latent-variable models and VAEs.",
"- page 8: \"as show\""
] | {
"criticism": 8,
"example": 4,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 0,
"presentation_and_reporting": 5,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 14
} | 0.571429 | 0.285714 | 0.285714 | 0.5 | 0 | 0.357143 | 0.142857 | 0.142857 | 14 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 2.285714 | 2.269932 | 0.015782 |
Incremental Learning through Deep Adaptation | Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior. | This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network while keeping the filters themselves fixed. Pros The proposed approach is novel and broadly applicable. By definition it maintains the exact performance on the original task, and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network). The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one. A number of different fine-tuning regimes are explored. The paper is mostly clear and well-written (though with a few typos that should be fixed). Cons/Questions/Suggestions The distinction between the convolutional and fully-connected layers (called “classifiers”) in the approach description (sec 3) is somewhat arbitrary -- after all, convolutional layers are a generalization of fully-connected layers. (This is hinted at by the mention of fully convolutional networks.) The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights. A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers, but I understand this would be a rather large experimental burden. When discussing the controller initialization (sec 4.3), it’s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results. Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning? It’s not immediately clear to me whether this is the case -- it could help to clarify this in the text. 
If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an “identity+noise” initialization of the controller matrix, which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation. (Equivalently, one could treat the controller-weighted filters as a “residual” term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W.crossF) rather than just W.crossF.) The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting. It would be interesting to see how this compares with the hard thresholding method used here. (As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.) Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nIncremental Learning through Deep Adaptation\nGiven an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added task, typically as many as the original network. We propose a method called Deep Adaptation Networks (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs preserve performance on the original task, require a fraction (typically 13%) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network while keeping the filters themselves fixed. Pros The proposed approach is novel and broadly applicable. By definition it maintains the exact performance on the original task, and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network). The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one. A number of different fine-tuning regimes are explored. The paper is mostly clear and well-written (though with a few typos that should be fixed). Cons/Questions/Suggestions The distinction between the convolutional and fully-connected layers (called “classifiers”) in the approach description (sec 3) is somewhat arbitrary -- after all, convolutional layers are a generalization of fully-connected layers. (This is hinted at by the mention of fully convolutional networks.) The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights. A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers, but I understand this would be a rather large experimental burden. When discussing the controller initialization (sec 4.3), it’s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results. Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning? It’s not immediately clear to me whether this is the case -- it could help to clarify this in the text. 
If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an “identity+noise” initialization of the controller matrix, which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation. (Equivalently, one could treat the controller-weighted filters as a “residual” term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W.crossF) rather than just W.crossF.) The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting. It would be interesting to see how this compares with the hard thresholding method used here. (As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.) Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly.",
"role": "assistant"
}
]
| [
" This paper proposes to adapt convnet representations to new tasks while avoiding catastrophic forgetting by learning a per-task “controller” specifying weightings of the convolution-al filters throughout the network while keeping the filters themselves fixed.",
"Pros The proposed approach is novel and broadly applicable.",
"By definition it maintains the exact performance on the original task, and enables the network to transfer to new tasks using a controller with a small number of parameters (asymptotically smaller than that of the base network).",
"The method is tested on a number of datasets (each used as source and target) and shows good transfer learning performance on each one.",
"A number of different fine-tuning regimes are explored.",
"The paper is mostly clear and well-written (though with a few typos that should be fixed).",
"Cons/Questions/Suggestions The distinction between the convolutional and fully-connected layers (called “classifiers”) in the approach description (sec 3) is somewhat arbitrary -- after all, convolutional layers are a generalization of fully-connected layers.",
"(This is hinted at by the mention of fully convolutional networks.)",
"The method could just as easily be applied to learn a task-specific rotation of the fully-connected layer weights.",
"A more systematic set of experiments could compare learning the proposed weightings on the first K layers of the network (for K={0, 1, …, N}) and learning independent weights for the latter N-K layers, but I understand this would be a rather large experimental burden.",
"When discussing the controller initialization (sec 4.3), it’s stated that the diagonal init works the best, and that this means one only needs to learn the diagonals to get the best results.",
"Is this implying that the gradients wrt off-diagonal entries of the controller weight matrix are 0 under the diagonal initialization, hence the off-diagonal entries remain zero after learning?",
"It’s not immediately clear to me whether this is the case -- it could help to clarify this in the text.",
"If the off-diag gradients are indeed 0 under the diag init, it could also make sense to experiment with an “identity+noise” initialization of the controller matrix, which might give the best of both worlds in terms of flexibility and inductive bias to maintain the original representation.",
"(Equivalently, one could treat the controller-weighted filters as a “residual” term on the original filters F with the controller weights W initialized to noise, with the final filters being F+(W.crossF) rather than just W.crossF.)",
"The dataset classifier (sec 4.3.4) could be learnt end-to-end by using a softmax output of the dataset classifier as the alpha weighting.",
"It would be interesting to see how this compares with the hard thresholding method used here.",
"(As an intermediate step, the performance could also be measured with the dataset classifier trained in the same way but used as a soft weighting, rather than the hard version rounding alpha to 0 or 1.)",
"Overall, the paper is clear and the proposed method is sensible, novel, and evaluated reasonably thoroughly."
] | {
"criticism": 2,
"example": 3,
"importance_and_relevance": 2,
"materials_and_methods": 15,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 6,
"suggestion_and_solution": 8,
"total": 19
} | 0.105263 | 0.157895 | 0.105263 | 0.789474 | 0.210526 | 0.210526 | 0.315789 | 0.421053 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.315789 | 1.747622 | 0.568167 |
Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation | The high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen rich or deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design neural networks requiring 90% fewer input features with little accuracy degradation. | In this paper, the authors proposed a framework to classify cells and implement cell segmentation based on the deep learning techniques. Using the classification results to guide the feature selection method, the method can achieve comparable performance even 90% of the input features are reduced. In general, the paper addresses an interesting problem, but the technical contribution is rather incremental. The authors seem to apply some well-define methods to realize a new task. The authors are expected to clarify their technical contributions or model improvement to address the specific problem. Moreover, there also exist some recent progress on image segmentation, such as FCN or mask R-CNN. The authors are expected to demonstrate the results by improving these advanced models. In general, this is an interesting paper, but would be more fit to MICCAI or ISBI. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nData-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation\nThe high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen rich or deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design neural networks requiring 90% fewer input features with little accuracy degradation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors proposed a framework to classify cells and implement cell segmentation based on the deep learning techniques. Using the classification results to guide the feature selection method, the method can achieve comparable performance even 90% of the input features are reduced. In general, the paper addresses an interesting problem, but the technical contribution is rather incremental. The authors seem to apply some well-define methods to realize a new task. The authors are expected to clarify their technical contributions or model improvement to address the specific problem. Moreover, there also exist some recent progress on image segmentation, such as FCN or mask R-CNN. The authors are expected to demonstrate the results by improving these advanced models. In general, this is an interesting paper, but would be more fit to MICCAI or ISBI.",
"role": "assistant"
}
]
| [
" In this paper, the authors proposed a framework to classify cells and implement cell segmentation based on the deep learning techniques.",
"Using the classification results to guide the feature selection method, the method can achieve comparable performance even 90% of the input features are reduced.",
"In general, the paper addresses an interesting problem, but the technical contribution is rather incremental.",
"The authors seem to apply some well-define methods to realize a new task.",
"The authors are expected to clarify their technical contributions or model improvement to address the specific problem.",
"Moreover, there also exist some recent progress on image segmentation, such as FCN or mask R-CNN.",
"The authors are expected to demonstrate the results by improving these advanced models.",
"In general, this is an interesting paper, but would be more fit to MICCAI or ISBI."
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 4,
"presentation_and_reporting": 0,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 8
} | 0.125 | 0 | 0.375 | 0.625 | 0.5 | 0 | 0.25 | 0.25 | 8 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation | The high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen rich or deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design neural networks requiring 90% fewer input features with little accuracy degradation. | Authors propose a greedy scheme to select a subset of (highly correlated) spectral features in a classification task. The selection criterion used is the average magnitude with which this feature contributes to the activation of a next-layer perceptron. Once validation accuracy drops too much, the pruned network is retrained, etc. Pro: - Method works well on a single data set and solves the problem - Paper is clearly written - Good use of standard tricks Con: - Little novelty This paper could be a good fit for an applied conference such as the International Symposium on Biomedical Imaging. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nData-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation\nThe high dimensionality of hyperspectral imaging forces unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternatively in nitrogen rich or deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design neural networks requiring 90% fewer input features with little accuracy degradation.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Authors propose a greedy scheme to select a subset of (highly correlated) spectral features in a classification task. The selection criterion used is the average magnitude with which this feature contributes to the activation of a next-layer perceptron. Once validation accuracy drops too much, the pruned network is retrained, etc. Pro: - Method works well on a single data set and solves the problem - Paper is clearly written - Good use of standard tricks Con: - Little novelty This paper could be a good fit for an applied conference such as the International Symposium on Biomedical Imaging.",
"role": "assistant"
}
]
| [
" Authors propose a greedy scheme to select a subset of (highly correlated) spectral features in a classification task.",
"The selection criterion used is the average magnitude with which this feature contributes to the activation of a next-layer perceptron.",
"Once validation accuracy drops too much, the pruned network is retrained, etc.",
"Pro: - Method works well on a single data set and solves the problem - Paper is clearly written - Good use of standard tricks Con: - Little novelty This paper could be a good fit for an applied conference such as the International Symposium on Biomedical Imaging."
] | {
"criticism": 0,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 4
} | 0 | 0 | 0.25 | 1 | 0.25 | 0.25 | 0.25 | 0.25 | 4 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.25 | 0.971623 | 1.278377 |
Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks | Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks. | This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance. It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases. It is disappointing, because this paper is named as "finding optimal representation spaces ...". In addition, one of the contributions claimed in this paper is about introducing the "formalism" of an optimal representation space. However, I didn*t see any formal definition of this concept or theoretical justification. About FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5 --- I think this simple fact is missed in this paper. The experimental results are not convincing, because I didn*t find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space. There are statements in this paper that I didn*t agree with 1) Distributional hypothesis from Harris (1954) is about words not sentences. 2) Not sure the following line makes sense: "However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar" | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDecoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks\nExperimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance. It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases. It is disappointing, because this paper is named as \"finding optimal representation spaces ...\". In addition, one of the contributions claimed in this paper is about introducing the \"formalism\" of an optimal representation space. However, I didn*t see any formal definition of this concept or theoretical justification. About FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5 --- I think this simple fact is missed in this paper. The experimental results are not convincing, because I didn*t find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space. There are statements in this paper that I didn*t agree with 1) Distributional hypothesis from Harris (1954) is about words not sentences. 2) Not sure the following line makes sense: \"However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar\"",
"role": "assistant"
}
]
| [
" This paper proposes the concept of optimal representation space and suggests that a model should be evaluated in its optimal representation space to get good performance.",
"It could be a good idea if this paper could suggest some ways to find the optimal representation space in general, instead of just showing two cases.",
"It is disappointing, because this paper is named as \"finding optimal representation spaces ...\".",
"In addition, one of the contributions claimed in this paper is about introducing the \"formalism\" of an optimal representation space.",
"However, I didn*t see any formal definition of this concept or theoretical justification.",
"About FastSent or any other log-linear model, the reason that dot product (or cosine similarity) is a good metric is because the model is trained to optimize the dot product, as shown in equation 5 --- I think this simple fact is missed in this paper.",
"The experimental results are not convincing, because I didn*t find any consistent pattern that shows the performance is getting better once we evaluated the model in its optimal representation space.",
"There are statements in this paper that I didn*t agree with 1) Distributional hypothesis from Harris (1954) is about words not sentences.",
"2) Not sure the following line makes sense: \"However, these unsupervised tasks are more interesting from a general AI point of view, as they test whether the machine truly understands the human notion of similarity, without being explicitly told what is similar\""
] | {
"criticism": 6,
"example": 1,
"importance_and_relevance": 3,
"materials_and_methods": 4,
"praise": 3,
"presentation_and_reporting": 2,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 9
} | 0.666667 | 0.111111 | 0.333333 | 0.444444 | 0.333333 | 0.222222 | 0.444444 | 0.111111 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 2.414148 | 0.252519 |
Complex- and Real-Valued Neural Network Architectures | Complex-value neural networks are not a new concept, however, the use of real-values has often been favoured over complex-values due to difficulties in training and accuracy of results. Existing literature ignores the number of parameters used. We compared complex- and real-valued neural networks using five activation functions. We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equal to or slightly worse than real-value neural networks. However, when specialised architecture is used, complex-valued neural networks outperform real-valued neural networks. Therefore, complex–valued neural networks should be used when the input data is also complex or it can be meaningfully to the complex plane, or when the network architecture uses the structure defined by using complex numbers. | This work re-evaluates complex-valued neural networks: complex weights, complex activation functions. The paper acknowledges that complex networks are not new, and that the findings of previous authors is that complex networks perform less well than real-valued alternatives. The paper reports a comparison of real-valued and complex-valued neural networks, controlling for storage capacity (with an interesting discussion of controlling for capacity in terms of computational inference). The paper concludes with "Overall the complex-valued neural networks do not perform as well as expected." I didn*t understand this conclusion, because previous work found complex-valued neural networks to be inferior, which is consistent with the results reported here. I did not see support in this paper for the claim in the abstract that special architectures make complex networks work better, or that they are well suited to particular data sets. The empirical results are only presented in table-of-numbers format (graphical comparisons would be easier to understand), and tables 5-8 are all zero, which doesn*t make sense for these classification tasks. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nComplex- and Real-Valued Neural Network Architectures\nComplex-value neural networks are not a new concept, however, the use of real-values has often been favoured over complex-values due to difficulties in training and accuracy of results. Existing literature ignores the number of parameters used. We compared complex- and real-valued neural networks using five activation functions. We found that when real and complex neural networks are compared using simple classification tasks, complex neural networks perform equal to or slightly worse than real-value neural networks. However, when specialised architecture is used, complex-valued neural networks outperform real-valued neural networks. Therefore, complex–valued neural networks should be used when the input data is also complex or it can be meaningfully to the complex plane, or when the network architecture uses the structure defined by using complex numbers.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This work re-evaluates complex-valued neural networks: complex weights, complex activation functions. The paper acknowledges that complex networks are not new, and that the findings of previous authors is that complex networks perform less well than real-valued alternatives. The paper reports a comparison of real-valued and complex-valued neural networks, controlling for storage capacity (with an interesting discussion of controlling for capacity in terms of computational inference). The paper concludes with \"Overall the complex-valued neural networks do not perform as well as expected.\" I didn*t understand this conclusion, because previous work found complex-valued neural networks to be inferior, which is consistent with the results reported here. I did not see support in this paper for the claim in the abstract that special architectures make complex networks work better, or that they are well suited to particular data sets. The empirical results are only presented in table-of-numbers format (graphical comparisons would be easier to understand), and tables 5-8 are all zero, which doesn*t make sense for these classification tasks.",
"role": "assistant"
}
]
| [
" This work re-evaluates complex-valued neural networks: complex weights, complex activation functions.",
"The paper acknowledges that complex networks are not new, and that the findings of previous authors is that complex networks perform less well than real-valued alternatives.",
"The paper reports a comparison of real-valued and complex-valued neural networks, controlling for storage capacity (with an interesting discussion of controlling for capacity in terms of computational inference).",
"The paper concludes with \"Overall the complex-valued neural networks do not perform as well as expected.\"",
"I didn*t understand this conclusion, because previous work found complex-valued neural networks to be inferior, which is consistent with the results reported here.",
"I did not see support in this paper for the claim in the abstract that special architectures make complex networks work better, or that they are well suited to particular data sets.",
"The empirical results are only presented in table-of-numbers format (graphical comparisons would be easier to understand), and tables 5-8 are all zero, which doesn*t make sense for these classification tasks."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 7,
"praise": 1,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 1,
"total": 7
} | 0.714286 | 0 | 0.285714 | 1 | 0.142857 | 0.142857 | 0.571429 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 2.431833 | 0.568167 |
Global Optimality Conditions for Deep Neural Networks | We study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting. | -I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e. changing the title to … deep LINEAR networks - Theorems 2.1, 2.2 and the observation (2) are nice! - Theorem 2.2 there is no discussion about the nature of the saddle point is it strict? Does this theorem imply that the global optima can be reached from a random initialization? Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary. - I’m a bit puzzled by Theorems 4.1 and 4.2 and why they are useful. Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result? Further discussion would be helpful. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGlobal Optimality Conditions for Deep Neural Networks\nWe study the error landscape of deep linear and nonlinear neural networks with the squared error loss. Minimizing the loss of a deep linear neural network is a nonconvex problem, and despite recent progress, our understanding of this loss surface is still incomplete. For deep linear networks, we present necessary and sufficient conditions for a critical point of the risk function to be a global minimum. Surprisingly, our conditions provide an efficiently checkable test for global optimality, while such tests are typically intractable in nonconvex optimization. We further extend these results to deep nonlinear neural networks and prove similar sufficient conditions for global optimality, albeit in a more limited function space setting.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n -I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e. changing the title to … deep LINEAR networks - Theorems 2.1, 2.2 and the observation (2) are nice! - Theorem 2.2 there is no discussion about the nature of the saddle point is it strict? Does this theorem imply that the global optima can be reached from a random initialization? Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary. - I’m a bit puzzled by Theorems 4.1 and 4.2 and why they are useful. Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result? Further discussion would be helpful.",
"role": "assistant"
}
]
| [
" -I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e.",
"changing the title to … deep LINEAR networks - Theorems 2.1, 2.2 and the observation (2) are nice!",
"- Theorem 2.2 there is no discussion about the nature of the saddle point is it strict?",
"Does this theorem imply that the global optima can be reached from a random initialization?",
"Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary.",
"- I’m a bit puzzled by Theorems 4.1 and 4.2 and why they are useful.",
"Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result?",
"Further discussion would be helpful."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 1,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 5,
"suggestion_and_solution": 3,
"total": 8
} | 0.375 | 0 | 0.25 | 0.125 | 0.25 | 0.125 | 0.625 | 0.375 | 8 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
Discrete Autoencoders for Sequence Models | Recurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients though this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations. | The authors describe a method for encoding text into a discrete representation / latent space. On a measure that they propose, they should it outperforms an alternative Gumbel-Softmax method for both language modeling and NMT. The proposed method seems effective, and the proposed DSAE metric is nice, though it’s surprising if previous papers have not used metrics similar to normalized reduction in log-ppl. The datasets considered in the experiments are also large, another plus. However, overall, the paper is difficult to read and parse, especially since low-level details are weaved together with higher-level points throughout, and are often not motivated. The major critique would be the qualitative nature of results in the sections on “Decipering the latent code” and (to a lesser extent) “Mixed sample-beam decoding.” These two sections are simply too anecdotal, although it is nice being stepped through the reasoning for the single example considered in Section 3.3. Some quantitative or aggregate results are needed, and it should at least be straightforward to do so using human evaluation for a subset of examples for diverse decoding. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiscrete Autoencoders for Sequence Models\nRecurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients though this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors describe a method for encoding text into a discrete representation / latent space. On a measure that they propose, they should it outperforms an alternative Gumbel-Softmax method for both language modeling and NMT. The proposed method seems effective, and the proposed DSAE metric is nice, though it’s surprising if previous papers have not used metrics similar to normalized reduction in log-ppl. The datasets considered in the experiments are also large, another plus. However, overall, the paper is difficult to read and parse, especially since low-level details are weaved together with higher-level points throughout, and are often not motivated. The major critique would be the qualitative nature of results in the sections on “Decipering the latent code” and (to a lesser extent) “Mixed sample-beam decoding.” These two sections are simply too anecdotal, although it is nice being stepped through the reasoning for the single example considered in Section 3.3. Some quantitative or aggregate results are needed, and it should at least be straightforward to do so using human evaluation for a subset of examples for diverse decoding.",
"role": "assistant"
}
]
| [
" The authors describe a method for encoding text into a discrete representation / latent space.",
"On a measure that they propose, they should it outperforms an alternative Gumbel-Softmax method for both language modeling and NMT.",
"The proposed method seems effective, and the proposed DSAE metric is nice, though it’s surprising if previous papers have not used metrics similar to normalized reduction in log-ppl.",
"The datasets considered in the experiments are also large, another plus.",
"However, overall, the paper is difficult to read and parse, especially since low-level details are weaved together with higher-level points throughout, and are often not motivated.",
"The major critique would be the qualitative nature of results in the sections on “Decipering the latent code” and (to a lesser extent) “Mixed sample-beam decoding.” These two sections are simply too anecdotal, although it is nice being stepped through the reasoning for the single example considered in Section 3.3.",
"Some quantitative or aggregate results are needed, and it should at least be straightforward to do so using human evaluation for a subset of examples for diverse decoding."
] | {
"criticism": 2,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 2,
"total": 7
} | 0.285714 | 0.142857 | 0 | 0.857143 | 0.142857 | 0.285714 | 0.285714 | 0.285714 | 7 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.285714 | 1.717547 | 0.568167 |
Discrete Autoencoders for Sequence Models | Recurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients though this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations. | The topic is interesting however the description in the paper is lacking clarity. The paper is written in a procedural fashion - I first did that, then I did that and after that I did third. Having proper mathematical description and good diagrams of what you doing would have immensely helped. Another big issue is the lack of proper validation in Section 3.4. Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis. Given that I see only one example I can not objectively know if your model produces examples like that *each* time so having just one example is as good as having none. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDiscrete Autoencoders for Sequence Models\nRecurrent models for sequences have been recently successful at many tasks, especially for language modeling and machine translation. Nevertheless, it remains challenging to extract good representations from these models. For instance, even though language has a clear hierarchical structure going from characters through words to sentences, it is not apparent in current language models. We propose to improve the representation in sequence models by augmenting current approaches with an autoencoder that is forced to compress the sequence through an intermediate discrete latent space. In order to propagate gradients though this discrete representation we introduce an improved semantic hashing technique. We show that this technique performs well on a newly proposed quantitative efficiency measure. We also analyze latent codes produced by the model showing how they correspond to words and phrases. Finally, we present an application of the autoencoder-augmented model to generating diverse translations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The topic is interesting however the description in the paper is lacking clarity. The paper is written in a procedural fashion - I first did that, then I did that and after that I did third. Having proper mathematical description and good diagrams of what you doing would have immensely helped. Another big issue is the lack of proper validation in Section 3.4. Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis. Given that I see only one example I can not objectively know if your model produces examples like that *each* time so having just one example is as good as having none.",
"role": "assistant"
}
]
| [
" The topic is interesting however the description in the paper is lacking clarity.",
"The paper is written in a procedural fashion - I first did that, then I did that and after that I did third.",
"Having proper mathematical description and good diagrams of what you doing would have immensely helped.",
"Another big issue is the lack of proper validation in Section 3.4.",
"Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis.",
"Given that I see only one example I can not objectively know if your model produces examples like that *each* time so having just one example is as good as having none."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 4,
"praise": 2,
"presentation_and_reporting": 3,
"results_and_discussion": 0,
"suggestion_and_solution": 0,
"total": 6
} | 0.5 | 0 | 0.166667 | 0.666667 | 0.333333 | 0.5 | 0 | 0 | 6 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 2.166667 | 1.393328 | 0.773339 |
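Each record closes with a trailer of derived statistics. Read against the column header, these appear to be: the eight per-category sentence counts from criteria_count divided by total (e.g. criticism 3/6 = 0.5 above), eight 0/1 flags marking whether a category occurs at all, a reward_value that matches the sum of those fractions, and a length-adjusted reward equal to reward_value minus length_penalty (2.166667 - 0.773339 = 1.393328 above). The total field matches the number of entries in review_sentences, while the raw category counts can sum past it because one sentence can carry several labels. Below is a minimal Python sketch of that apparent relationship; the function name and the interpretation are assumptions inferred from the rows here, not taken from any published loader, and length_penalty is treated as given because the rows do not show how it is computed.

# Sketch (assumption): recompute the numeric trailer of a row from its criteria_count dict.
CATEGORIES = [
    "criticism", "example", "importance_and_relevance", "materials_and_methods",
    "praise", "presentation_and_reporting", "results_and_discussion",
    "suggestion_and_solution",
]

def trailer_fields(criteria_count, length_penalty):
    total = criteria_count["total"]
    fractions = [criteria_count[c] / total for c in CATEGORIES]   # e.g. 3/6 = 0.5 for criticism
    has_flags = [int(criteria_count[c] > 0) for c in CATEGORIES]  # 1, 0, 1, 1, 1, 1, 0, 0 above
    reward_value = sum(fractions)                                  # 2.166667 above
    return fractions, has_flags, reward_value, reward_value - length_penalty

# Checked against the first record above:
row = {"criticism": 3, "example": 0, "importance_and_relevance": 1,
       "materials_and_methods": 4, "praise": 2, "presentation_and_reporting": 3,
       "results_and_discussion": 0, "suggestion_and_solution": 0, "total": 6}
fr, fl, rv, adj = trailer_fields(row, length_penalty=0.773339)
print(round(rv, 6), round(adj, 6))  # 2.166667 1.393328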
Predict Responsibly: Increasing Fairness by Learning to Defer | When machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly. To fulfill these three requirements, a model must be able to output a reject option (i.e. say "``I Don*t Know") when it is not qualified to make a prediction. In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user. We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives. We propose a learning algorithm which accounts for potential biases held by decision-makerslater in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline. | The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model). The novel classifier is the result of the optimisation of a loss function (composed of a part similar to a logistic regression model and a part being the disparate impact). Hence, it can be interpreted as a logistic loss with a fairness regularisation. The results are promising and the applications are very important for the acceptance of ML approaches in the society.
However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied. Finally, this paper used uncommon vocabulary (for the machine learning community) and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity). When reading the submitted paper, it was unclear (until section 6) how deferring could help fairness. Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss). To conclude, although the application is of high interest and the numerical results encouraging, the methodological approach does not seem to be very novel. Minor comment : - The list of authors of the reference “Machine bias : theres software…” apperars incorrectly (some comma may be missing in the .bib file) and there is a small typo in the title. Possible extensions : - The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) - It could also be generalised to a mixture of biased classifier (more than on DM). Edited : As noted by a fellow reviewer, the paper is a bit out of the scope of ICLR and may be more in line with other ML conferences. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPredict Responsibly: Increasing Fairness by Learning to Defer\nWhen machine learning models are used for high-stakes decisions, they should predict accurately, fairly, and responsibly. To fulfill these three requirements, a model must be able to output a reject option (i.e. say \"``I Don*t Know\") when it is not qualified to make a prediction. In this work, we propose learning to defer, a method by which a model can defer judgment to a downstream decision-maker such as a human user. We show that learning to defer generalizes the rejection learning framework in two ways: by considering the effect of other agents in the decision-making process, and by allowing for optimization of complex objectives. We propose a learning algorithm which accounts for potential biases held by decision-makerslater in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. Even when operated by highly biased users, we show that deferring models can still greatly improve the fairness of the entire pipeline.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model). The novel classifier is the result of the optimisation of a loss function (composed of a part similar to a logistic regression model and a part being the disparate impact). Hence, it can be interpreted as a logistic loss with a fairness regularisation. The results are promising and the applications are very important for the acceptance of ML approaches in the society.
However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied. Finally, this paper used uncommon vocabulary (for the machine learning community) and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity). When reading the submitted paper, it was unclear (until section 6) how deferring could help fairness. Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss). To conclude, although the application is of high interest and the numerical results encouraging, the methodological approach does not seem to be very novel. Minor comment : - The list of authors of the reference “Machine bias : theres software…” apperars incorrectly (some comma may be missing in the .bib file) and there is a small typo in the title. Possible extensions : - The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) - It could also be generalised to a mixture of biased classifier (more than on DM). Edited : As noted by a fellow reviewer, the paper is a bit out of the scope of ICLR and may be more in line with other ML conferences.",
"role": "assistant"
}
]
| [
" The proposed method is a classifier that is fair and works in collaboration with an unfair (but presumably accurate model).",
"The novel classifier is the result of the optimisation of a loss function (composed of a part similar to a logistic regression model and a part being the disparate impact).",
"Hence, it can be interpreted as a logistic loss with a fairness regularisation.",
"The results are promising and the applications are very important for the acceptance of ML approaches in the society.",
"However, I believe that the model could be made more general (than a fairness regularized logistic loss) and its theoretical properties studied.",
"Finally, this paper used uncommon vocabulary (for the machine learning community) and it make is difficult to follow sometimes (for example, the use of a Decision-Maker entity).",
"When reading the submitted paper, it was unclear (until section 6) how deferring could help fairness.",
"Hence, the structure of the paper could maybe be improved by introducing the cost function earlier in the manuscript (as a fairness regularised loss).",
"To conclude, although the application is of high interest and the numerical results encouraging, the methodological approach does not seem to be very novel.",
"Minor comment : - The list of authors of the reference “Machine bias : theres software…” apperars incorrectly (some comma may be missing in the .bib file) and there is a small typo in the title.",
"Possible extensions : - The proposed fairness aware loss could be made more general (and not only in the case of a logistic model) - It could also be generalised to a mixture of biased classifier (more than on DM).",
"Edited : As noted by a fellow reviewer, the paper is a bit out of the scope of ICLR and may be more in line with other ML conferences."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 3,
"presentation_and_reporting": 3,
"results_and_discussion": 4,
"suggestion_and_solution": 5,
"total": 12
} | 0.416667 | 0 | 0.166667 | 0.5 | 0.25 | 0.25 | 0.333333 | 0.416667 | 12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 2.317551 | 0.015782 |
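Every row's conversations column follows the same two-turn chat template: a user message that wraps the fixed reviewer instruction around the title and abstract, and an assistant message that concatenates the thinking trace with the review text. A small assembly sketch under that assumption follows; the prompt string is copied from the rows above, the separator whitespace between trace and review is approximate, and build_conversation is an illustrative name rather than part of any official builder script.

PROMPT = ("You are a member of the scientific community tasked with peer review. "
          "Review the following paper title and abstract.\n### Title + Abstract\n")

def build_conversation(title, abstract, thinking_trace, review):
    # Mirrors the user/assistant structure visible in the conversations lists above.
    user_turn = {"content": PROMPT + title + "\n" + abstract, "role": "user"}
    assistant_turn = {"content": thinking_trace + "\n\n\n\n" + review, "role": "assistant"}
    return [user_turn, assistant_turn]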
Robustness of Classifiers to Universal Perturbations: A Geometric Perspective | Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties. | The paper is written well and clear. The core contribution of the paper is the illustration that: under the assumption of flat, or curved decision boundaries with positive curvature small universal adversarial perturbations exist. Pros: the intuition and geometry is rather clearly presented. Cons: References to "CaffeNet" and "LeNet" (even though the latter is well-known) are missing. In the experimental section used to validate the main hypothesis that the deep networks have positive curvature decision boundaries, there is no description of how these networks were trained. It is not clear why the authors have decided to use out-dated 5-layer "LeNet" and NiN (Network in network) architectures instead of more recent and much better performing architectures (and less complex than NiN architectures). It would be nice to see how the behavior and boundaries look in these cases. The conclusion is speculative: "Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positive directions, which can possibly be done through regularization of the objective function. This will be the subject of future works." It is clear that regularization should play a significant role in shaping the decision boundaries. Unfortunately, the paper does not provide details at the basic level, which algorithms, architectures, hyper-parameters or regularization terms are used. All these factors should play a very significant role in the experimental validation of their hypothesis. Notes: I did not check the proofs of the theorems in detail. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? 
Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nRobustness of Classifiers to Universal Perturbations: A Geometric Perspective\nDeep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper is written well and clear. The core contribution of the paper is the illustration that: under the assumption of flat, or curved decision boundaries with positive curvature small universal adversarial perturbations exist. Pros: the intuition and geometry is rather clearly presented. Cons: References to \"CaffeNet\" and \"LeNet\" (even though the latter is well-known) are missing. In the experimental section used to validate the main hypothesis that the deep networks have positive curvature decision boundaries, there is no description of how these networks were trained. It is not clear why the authors have decided to use out-dated 5-layer \"LeNet\" and NiN (Network in network) architectures instead of more recent and much better performing architectures (and less complex than NiN architectures). It would be nice to see how the behavior and boundaries look in these cases. The conclusion is speculative: \"Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positive directions, which can possibly be done through regularization of the objective function. This will be the subject of future works.\" It is clear that regularization should play a significant role in shaping the decision boundaries. Unfortunately, the paper does not provide details at the basic level, which algorithms, architectures, hyper-parameters or regularization terms are used. All these factors should play a very significant role in the experimental validation of their hypothesis. Notes: I did not check the proofs of the theorems in detail.",
"role": "assistant"
}
]
| [
" The paper is written well and clear.",
"The core contribution of the paper is the illustration that: under the assumption of flat, or curved decision boundaries with positive curvature small universal adversarial perturbations exist.",
"Pros: the intuition and geometry is rather clearly presented.",
"Cons: References to \"CaffeNet\" and \"LeNet\" (even though the latter is well-known) are missing.",
"In the experimental section used to validate the main hypothesis that the deep networks have positive curvature decision boundaries, there is no description of how these networks were trained.",
"It is not clear why the authors have decided to use out-dated 5-layer \"LeNet\" and NiN (Network in network) architectures instead of more recent and much better performing architectures (and less complex than NiN architectures).",
"It would be nice to see how the behavior and boundaries look in these cases.",
"The conclusion is speculative: \"Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positive directions, which can possibly be done through regularization of the objective function.",
"This will be the subject of future works.\"",
"It is clear that regularization should play a significant role in shaping the decision boundaries.",
"Unfortunately, the paper does not provide details at the basic level, which algorithms, architectures, hyper-parameters or regularization terms are used.",
"All these factors should play a very significant role in the experimental validation of their hypothesis.",
"Notes: I did not check the proofs of the theorems in detail."
] | {
"criticism": 7,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 6,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 13
} | 0.538462 | 0.076923 | 0.307692 | 0.461538 | 0.076923 | 0.307692 | 0.307692 | 0.230769 | 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.307692 | 2.307692 | 0 |
Machine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures | The resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and a request of high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encodes each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods. | Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; TTNs exhibit same increase level of abstractions as CNN or DBN; Fidelity and entanglement entropy can be considered as some measurements of the network. Authors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, which suits more the 2-D nature of images. In section 2, authors stated that the choice of feature function is arbitrary, and a specific feature map was introduced in Section 4. However, it is not straightforward to connect (10) to (1) or (2). It is better to clarify this connection because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability. For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand. Is it related to specific feature map? How about the complexity of eigen-decomposition for one tensor at each iterates. And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm? More details on these problems are required for readers’ better understanding. From Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion. It is better to clarify these relationships with either clear explanation or better examples. From Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning. However, from Fig 3 only, it is hard to make this conclusion. 
First, there are not too many differences from Fig 3(a) to Fig 3(e). Second, there is no visualization result reported from deep learning on the same data for comparison. Hence, it is not convincing to draw this conclusion only from Fig 3. In Section 4.2, what strategy is used to obtain these parameters in Table 1? In Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nMachine Learning by Two-Dimensional Hierarchical Tensor Networks: A Quantum Information Theoretic Perspective on Deep Architectures\nThe resemblance between the methods used in studying quantum-many body physics and in machine learning has drawn considerable attention. In particular, tensor networks (TNs) and deep learning architectures bear striking similarities to the extent that TNs can be used for machine learning. Previous results used one-dimensional TNs in image recognition, showing limited scalability and a request of high bond dimension. In this work, we train two-dimensional hierarchical TNs to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz (MERA). This approach overcomes scalability issues and implies novel mathematical connections among quantum many-body physics, quantum information theory, and machine learning. While keeping the TN unitary in the training phase, TN states can be defined, which optimally encodes each class of the images into a quantum many-body state. We study the quantum features of the TN states, including quantum entanglement and fidelity. We suggest these quantities could be novel properties that characterize the image classes, as well as the machine learning tasks. Our work could be further applied to identifying possible quantum properties of certain artificial intelligence methods.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training. Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; TTNs exhibit same increase level of abstractions as CNN or DBN; Fidelity and entanglement entropy can be considered as some measurements of the network. Authors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, which suits more the 2-D nature of images. In section 2, authors stated that the choice of feature function is arbitrary, and a specific feature map was introduced in Section 4. However, it is not straightforward to connect (10) to (1) or (2). It is better to clarify this connection because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability. For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand. Is it related to specific feature map? How about the complexity of eigen-decomposition for one tensor at each iterates. And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm? More details on these problems are required for readers’ better understanding. From Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion. It is better to clarify these relationships with either clear explanation or better examples. From Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning. However, from Fig 3 only, it is hard to make this conclusion. First, there are not too many differences from Fig 3(a) to Fig 3(e). 
Second, there is no visualization result reported from deep learning on the same data for comparison. Hence, it is not convincing to draw this conclusion only from Fig 3. In Section 4.2, what strategy is used to obtain these parameters in Table 1? In Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy.",
"role": "assistant"
}
]
| [
" Authors of this paper derived an efficient quantum-inspired learning algorithm based on a hierarchical representation that is known as tree tensor network, which is inspired by the multipartite entanglement renormalization ansatz approach where the tensors in the TN are kept to be unitary during training.",
"Some observations are: The limitation of learnability of TTN strongly depends on the physical indexes and the geometrical indexes determine how well the TTNs approximate the limit; TTNs exhibit same increase level of abstractions as CNN or DBN; Fidelity and entanglement entropy can be considered as some measurements of the network.",
"Authors introduced the two-dimensional hierarchical tensor networks for solving image recognition problems, which suits more the 2-D nature of images.",
"In section 2, authors stated that the choice of feature function is arbitrary, and a specific feature map was introduced in Section 4.",
"However, it is not straightforward to connect (10) to (1) or (2).",
"It is better to clarify this connection because some important parameters such as the virtual bond and input bond are related to the complexity of the proposed algorithm as well as the limitation of learnability.",
"For example, the scaling of the complexity O(dN_T(b_v^5 + b_i^4)) is not easy to understand.",
"Is it related to specific feature map?",
"How about the complexity of eigen-decomposition for one tensor at each iterates.",
"And also, whether the tricks used to accelerate the computations will affect the convergence of the algorithm?",
"More details on these problems are required for readers’ better understanding.",
"From Fig 2, it is difficult to see the relationship between learnability and parameters such input bond and virtual bond because it seems there are no clear trends in the Fig 2(a) and (b) to make any conclusion.",
"It is better to clarify these relationships with either clear explanation or better examples.",
"From Fig 3, authors claimed that TN obtained the same levels of abstractions as in deep learning.",
"However, from Fig 3 only, it is hard to make this conclusion.",
"First, there are not too many differences from Fig 3(a) to Fig 3(e).",
"Second, there is no visualization result reported from deep learning on the same data for comparison.",
"Hence, it is not convincing to draw this conclusion only from Fig 3.",
"In Section 4.2, what strategy is used to obtain these parameters in Table 1?",
"In Section 5, it is interesting to see more experiments in terms of fidelity and entanglement entropy."
] | {
"criticism": 5,
"example": 6,
"importance_and_relevance": 1,
"materials_and_methods": 11,
"praise": 0,
"presentation_and_reporting": 5,
"results_and_discussion": 10,
"suggestion_and_solution": 5,
"total": 20
} | 0.25 | 0.3 | 0.05 | 0.55 | 0 | 0.25 | 0.5 | 0.25 | 20 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 2.15 | 1.376661 | 0.773339 |
Large Scale Optimal Transport and Mapping Estimation | This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling. | Quality The theoretical results presented in the paper appear to be correct. However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair. Clarity The paper is mostly clear, even though some parts deserve more discussion/clarification (algorithm, experimental evaluation). Originality The theoretical results are original, and the SGD approach is a priori original as well. Significance The relaxed dual formulation and OT/Monge maps convergence results are interesting and can of of interest for researchers in the area, the other aspects of the paper are limited. Pros: -Theoretical results on the convergence of OT/Monge maps -Regularized formulation compatible with SGD Cons -Experimental evaluation limited -The large scale aspect lacks of thorough analysis -The paper presents 2 contributions but at then end of the day, the development of each of them appears limited Comments: -The weak convergence results are interesting. However, the fact that no convergence rate is given makes the result weak. In particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential. This can be coherent with the problem of Domain Adaptation that can be NP-hard even under the co-variate shift assumption (Ben-David&Urner, ALT2012). Then, I think that the claim of page 6 saying that Domain Adaptation can be performed "nearly optimally" has then to be rephrased. I think that results show that the approach is theoretically justified but optimality is not here yet. Theorem 1 is only valid for entropy-based regularizations, what is the difficulty for having a similar result with L2 regularization? -The experimental evaluation on the running time is limited to one particular problem. If this subject is important, it would have been interesting to compare the approaches on other large scale problems and possibly with other implementations. It is also surprising that the efficiency the L2-regularized version is not evaluated. For a paper interesting in large scale aspects, the experimental evaluation is rather weak. The 2 methods compared in Fig 2 reach the same objective values at convergence, but is there any particular difference in the solutions found? -Algorithm 1 is presented without any discussion about complexity, rate of convergence. Could the authors discuss this aspect? 
The presentation of this algo is a bit short and could deserve more space (in the supplementary) -For the DA application, the considered datasets are classic but not really "large scale", anyway this is a minor remark. The setup is not completely clear, since the approach is interesting for out of sample data, so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map. This point is not very clear and we do not know how many source instances are used to compute the mapping - the mapping is incomplete on this point while this is an interesting aspect of the paper: this justifies even more the large scale aspect is the algo need less examples during learning to perform similar or even better classification. Hyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test (for all methods), which is not fair since target label information will not be available from a practical standpoint. The authors claim that they did not want to compete with state of the art DA, but the approach of Perrot et al., 2016 seems to a have a similar objective and could be used as a baseline. Experiments on generative optimal transport are interesting and probably generate more discussion/perspectives. -- After rebuttal -- Authors have answered to many of my comments, I think this is an interesting paper, I increase my score. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLarge Scale Optimal Transport and Mapping Estimation\nThis paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the amount of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n Quality The theoretical results presented in the paper appear to be correct. However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair. Clarity The paper is mostly clear, even though some parts deserve more discussion/clarification (algorithm, experimental evaluation). Originality The theoretical results are original, and the SGD approach is a priori original as well. Significance The relaxed dual formulation and OT/Monge maps convergence results are interesting and can of of interest for researchers in the area, the other aspects of the paper are limited. Pros: -Theoretical results on the convergence of OT/Monge maps -Regularized formulation compatible with SGD Cons -Experimental evaluation limited -The large scale aspect lacks of thorough analysis -The paper presents 2 contributions but at then end of the day, the development of each of them appears limited Comments: -The weak convergence results are interesting. However, the fact that no convergence rate is given makes the result weak. In particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential. This can be coherent with the problem of Domain Adaptation that can be NP-hard even under the co-variate shift assumption (Ben-David&Urner, ALT2012). Then, I think that the claim of page 6 saying that Domain Adaptation can be performed \"nearly optimally\" has then to be rephrased. I think that results show that the approach is theoretically justified but optimality is not here yet. Theorem 1 is only valid for entropy-based regularizations, what is the difficulty for having a similar result with L2 regularization? -The experimental evaluation on the running time is limited to one particular problem. If this subject is important, it would have been interesting to compare the approaches on other large scale problems and possibly with other implementations. It is also surprising that the efficiency the L2-regularized version is not evaluated. For a paper interesting in large scale aspects, the experimental evaluation is rather weak. 
The 2 methods compared in Fig 2 reach the same objective values at convergence, but is there any particular difference in the solutions found? -Algorithm 1 is presented without any discussion about complexity, rate of convergence. Could the authors discuss this aspect? The presentation of this algo is a bit short and could deserve more space (in the supplementary) -For the DA application, the considered datasets are classic but not really \"large scale\", anyway this is a minor remark. The setup is not completely clear, since the approach is interesting for out of sample data, so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map. This point is not very clear and we do not know how many source instances are used to compute the mapping - the mapping is incomplete on this point while this is an interesting aspect of the paper: this justifies even more the large scale aspect is the algo need less examples during learning to perform similar or even better classification. Hyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test (for all methods), which is not fair since target label information will not be available from a practical standpoint. The authors claim that they did not want to compete with state of the art DA, but the approach of Perrot et al., 2016 seems to a have a similar objective and could be used as a baseline. Experiments on generative optimal transport are interesting and probably generate more discussion/perspectives. -- After rebuttal -- Authors have answered to many of my comments, I think this is an interesting paper, I increase my score.",
"role": "assistant"
}
]
| [
" Quality The theoretical results presented in the paper appear to be correct.",
"However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair.",
"Clarity The paper is mostly clear, even though some parts deserve more discussion/clarification (algorithm, experimental evaluation).",
"Originality The theoretical results are original, and the SGD approach is a priori original as well.",
"Significance The relaxed dual formulation and OT/Monge maps convergence results are interesting and can of of interest for researchers in the area, the other aspects of the paper are limited.",
"Pros: -Theoretical results on the convergence of OT/Monge maps -Regularized formulation compatible with SGD Cons -Experimental evaluation limited -The large scale aspect lacks of thorough analysis -The paper presents 2 contributions but at then end of the day, the development of each of them appears limited Comments: -The weak convergence results are interesting.",
"However, the fact that no convergence rate is given makes the result weak.",
"In particular, it is possible that the number of examples needed for achieving a given approximation is at least exponential.",
"This can be coherent with the problem of Domain Adaptation that can be NP-hard even under the co-variate shift assumption (Ben-David&Urner, ALT2012).",
"Then, I think that the claim of page 6 saying that Domain Adaptation can be performed \"nearly optimally\" has then to be rephrased.",
"I think that results show that the approach is theoretically justified but optimality is not here yet.",
"Theorem 1 is only valid for entropy-based regularizations, what is the difficulty for having a similar result with L2 regularization?",
"-The experimental evaluation on the running time is limited to one particular problem.",
"If this subject is important, it would have been interesting to compare the approaches on other large scale problems and possibly with other implementations.",
"It is also surprising that the efficiency the L2-regularized version is not evaluated.",
"For a paper interesting in large scale aspects, the experimental evaluation is rather weak.",
"The 2 methods compared in Fig 2 reach the same objective values at convergence, but is there any particular difference in the solutions found?",
"-Algorithm 1 is presented without any discussion about complexity, rate of convergence.",
"Could the authors discuss this aspect?",
"The presentation of this algo is a bit short and could deserve more space (in the supplementary) -For the DA application, the considered datasets are classic but not really \"large scale\", anyway this is a minor remark.",
"The setup is not completely clear, since the approach is interesting for out of sample data, so I would expect the map to be computed on a small sample of source data, and then all source instances to be projected on target with the learned map.",
"This point is not very clear and we do not know how many source instances are used to compute the mapping - the mapping is incomplete on this point while this is an interesting aspect of the paper: this justifies even more the large scale aspect is the algo need less examples during learning to perform similar or even better classification.",
"Hyperparameter tuning is another aspect that is not sufficiently precise in the experimental setup: it seems that the parameters are tuned on test (for all methods), which is not fair since target label information will not be available from a practical standpoint.",
"The authors claim that they did not want to compete with state of the art DA, but the approach of Perrot et al., 2016 seems to a have a similar objective and could be used as a baseline.",
"Experiments on generative optimal transport are interesting and probably generate more discussion/perspectives.",
"-- After rebuttal -- Authors have answered to many of my comments, I think this is an interesting paper, I increase my score."
] | {
"criticism": 12,
"example": 2,
"importance_and_relevance": 6,
"materials_and_methods": 19,
"praise": 10,
"presentation_and_reporting": 4,
"results_and_discussion": 14,
"suggestion_and_solution": 7,
"total": 26
} | 0.461538 | 0.076923 | 0.230769 | 0.730769 | 0.384615 | 0.153846 | 0.538462 | 0.269231 | 26 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.846154 | 0.178923 | 2.667231 |
Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification | Knowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, then the student network can be competitive to the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, only the classification task is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick. | This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification. The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated. Specifically, it proposed to utilize the teacher model from classification to other tasks, and proposed a unified objective function to model the transferability as shown in Equation (5). The two terms in (5), (7) and (9) are used to transfer the knowledge from the teacher model. It maybe possible to claim that the different terms may play different roles for different tasks. However, there should be some general guidelines for choosing these different terms for regularization, rather than just make the claim purely based on the final results. In table 4 and table 5, the results seem to be not so consistent for using the distillation loss. The author mentioned that it is due to the weak teacher model. However, the teacher model just differs in performance with around 3% in accuracy. How could we define the “good” or “bad” of a teacher model for model distillation/transfer? Besides, it seems that the improvement comes largely from the trick of initialization as mentioned in Section 3.2. Hence, it is still not clear which parts contribute to the final performance improvements. It could be better if the authors can report the results from each of the components together. The authors just try the parameter (.alpha, .beta) to be (0,0), (1,0), (0,1) and (1,1). I think the range for both values could be any positive real value, and how about the performance for other sets of combinations, like (0.5, 0.5)? | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nModel Distillation with Knowledge Transfer from Face Classification to Alignment and Verification\nKnowledge distillation is a potential solution for model compression. The idea is to make a small student network imitate the target of a large teacher network, then the student network can be competitive to the teacher one. Most previous studies focus on model distillation in the classification task, where they propose different architectures and initializations for the student network. However, only the classification task is not enough, and other related tasks such as regression and retrieval are barely considered. To solve the problem, in this paper, we take face recognition as a breaking point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets in the knowledge transfer, the distillation can be easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive to the teacher one in alignment and verification, and even surpasses the teacher network under specific compression rates. In addition, to achieve stronger knowledge transfer, we also use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-Webface and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification. The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated. Specifically, it proposed to utilize the teacher model from classification to other tasks, and proposed a unified objective function to model the transferability as shown in Equation (5). The two terms in (5), (7) and (9) are used to transfer the knowledge from the teacher model. It maybe possible to claim that the different terms may play different roles for different tasks. However, there should be some general guidelines for choosing these different terms for regularization, rather than just make the claim purely based on the final results. In table 4 and table 5, the results seem to be not so consistent for using the distillation loss. The author mentioned that it is due to the weak teacher model. However, the teacher model just differs in performance with around 3% in accuracy. How could we define the “good” or “bad” of a teacher model for model distillation/transfer? Besides, it seems that the improvement comes largely from the trick of initialization as mentioned in Section 3.2. Hence, it is still not clear which parts contribute to the final performance improvements. It could be better if the authors can report the results from each of the components together. The authors just try the parameter (.alpha, .beta) to be (0,0), (1,0), (0,1) and (1,1). I think the range for both values could be any positive real value, and how about the performance for other sets of combinations, like (0.5, 0.5)?",
"role": "assistant"
}
]
| [
" This paper proposed to transfer the classifier from the model for face classification to the task of alignment and verification.",
"The problem setting is interesting and valuable, however, the contribution is not clearly demonstrated.",
"Specifically, it proposed to utilize the teacher model from classification to other tasks, and proposed a unified objective function to model the transferability as shown in Equation (5).",
"The two terms in (5), (7) and (9) are used to transfer the knowledge from the teacher model.",
"It maybe possible to claim that the different terms may play different roles for different tasks.",
"However, there should be some general guidelines for choosing these different terms for regularization, rather than just make the claim purely based on the final results.",
"In table 4 and table 5, the results seem to be not so consistent for using the distillation loss.",
"The author mentioned that it is due to the weak teacher model.",
"However, the teacher model just differs in performance with around 3% in accuracy.",
"How could we define the “good” or “bad” of a teacher model for model distillation/transfer?",
"Besides, it seems that the improvement comes largely from the trick of initialization as mentioned in Section 3.2.",
"Hence, it is still not clear which parts contribute to the final performance improvements.",
"It could be better if the authors can report the results from each of the components together.",
"The authors just try the parameter (.alpha, .beta) to be (0,0), (1,0), (0,1) and (1,1).",
"I think the range for both values could be any positive real value, and how about the performance for other sets of combinations, like (0.5, 0.5)?"
] | {
"criticism": 6,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 10,
"praise": 1,
"presentation_and_reporting": 7,
"results_and_discussion": 4,
"suggestion_and_solution": 4,
"total": 15
} | 0.4 | 0.066667 | 0.066667 | 0.666667 | 0.066667 | 0.466667 | 0.266667 | 0.266667 | 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.266667 | 2.203537 | 0.06313 |
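The distillation record above debates how to weight two knowledge-transfer terms with (\alpha, \beta) and whether fractional settings such as (0.5, 0.5) would help. The short Python sketch below only illustrates such a weighted objective and a sweep over those settings; the term names (task, hint, soft-target) and the loss values are invented for the example and are not the paper's Equations (5), (7) and (9).

```python
def distillation_objective(task_loss, hint_loss, soft_target_loss, alpha=1.0, beta=1.0):
    """Task loss plus two knowledge-transfer terms weighted by alpha and beta."""
    return task_loss + alpha * hint_loss + beta * soft_target_loss

# Sweep the (alpha, beta) settings mentioned in the review, plus one fractional pair.
for alpha, beta in [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]:
    total = distillation_objective(task_loss=0.80, hint_loss=0.30,
                                   soft_target_loss=0.50, alpha=alpha, beta=beta)
    print(f"(alpha={alpha}, beta={beta}) -> total loss {total:.2f}")
```

In a real grid search one would of course compare validation metrics rather than the raw training objective.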
Challenges in Disentangling Independent Factors of Variation | We study the problem of building models that disentangle independent factors of variation. Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We show the role of feature dimensionality and adversarial training theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used. The numerical value of a factor has different meaning in different reference frames. When the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs. | This paper studies the challenges of disentangling independent factors of variation under weakly labeled data. A term "reference ambiguity" is introduced, which refers to the fact that there is no guarantee that two data points with same factor of variation will be mapped to the same point if there is only weakly labeled data to that extend. I am having a hard time understanding the message of the paper. The proof in section 3.1, although elementary, is nice. But then the authors train fairly standard networks in experiments (section 4) for datasets studied with these methods in the literature, and they fail to draw any connection to the introduced reference ambiguity concept. As written, the paper to me looks like two separate white papers: {beginning - to -end of section 3}: as a theoretical white paper that lacks experiments, and {section 4}: experiments with some recent methods / datasets (this part is almost like a cute course project). Either the paper lacks to harmonically present the over arching goal, or I have missed if there was any such message that was implicit in between the lines. A rewrite with strong connection between the theory and the experiments is required. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nChallenges in Disentangling Independent Factors of Variation\nWe study the problem of building models that disentangle independent factors of variation. Such models encode features that can efficiently be used for classification and to transfer attributes between different images in image synthesis. As data we use a weakly labeled training set, where labels indicate what single factor has changed between two data samples, although the relative value of the change is unknown. This labeling is of particular interest as it may be readily available without annotation costs. We introduce an autoencoder model and train it through constraints on image pairs and triplets. We show the role of feature dimensionality and adversarial training theoretically and experimentally. We formally prove the existence of the reference ambiguity, which is inherently present in the disentangling task when weakly labeled data is used. The numerical value of a factor has different meaning in different reference frames. When the reference depends on other factors, transferring that factor becomes ambiguous. We demonstrate experimentally that the proposed model can successfully transfer attributes on several datasets, but show also cases when the reference ambiguity occurs.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper studies the challenges of disentangling independent factors of variation under weakly labeled data. A term \"reference ambiguity\" is introduced, which refers to the fact that there is no guarantee that two data points with same factor of variation will be mapped to the same point if there is only weakly labeled data to that extend. I am having a hard time understanding the message of the paper. The proof in section 3.1, although elementary, is nice. But then the authors train fairly standard networks in experiments (section 4) for datasets studied with these methods in the literature, and they fail to draw any connection to the introduced reference ambiguity concept. As written, the paper to me looks like two separate white papers: {beginning - to -end of section 3}: as a theoretical white paper that lacks experiments, and {section 4}: experiments with some recent methods / datasets (this part is almost like a cute course project). Either the paper lacks to harmonically present the over arching goal, or I have missed if there was any such message that was implicit in between the lines. A rewrite with strong connection between the theory and the experiments is required.",
"role": "assistant"
}
]
| [
" This paper studies the challenges of disentangling independent factors of variation under weakly labeled data.",
"A term \"reference ambiguity\" is introduced, which refers to the fact that there is no guarantee that two data points with same factor of variation will be mapped to the same point if there is only weakly labeled data to that extend.",
"I am having a hard time understanding the message of the paper.",
"The proof in section 3.1, although elementary, is nice.",
"But then the authors train fairly standard networks in experiments (section 4) for datasets studied with these methods in the literature, and they fail to draw any connection to the introduced reference ambiguity concept.",
"As written, the paper to me looks like two separate white papers: {beginning - to -end of section 3}: as a theoretical white paper that lacks experiments, and {section 4}: experiments with some recent methods / datasets (this part is almost like a cute course project).",
"Either the paper lacks to harmonically present the over arching goal, or I have missed if there was any such message that was implicit in between the lines.",
"A rewrite with strong connection between the theory and the experiments is required."
] | {
"criticism": 4,
"example": 1,
"importance_and_relevance": 0,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 4,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 8
} | 0.5 | 0.125 | 0 | 0.625 | 0.125 | 0.5 | 0.125 | 0.125 | 8 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2.125 | 1.730439 | 0.394561 |
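The record above hinges on the reference-ambiguity argument: with only weak labels ("factor v changed between this pair"), the numerical value of a factor can mean different things depending on the other factors. The toy Python example below is our own illustration of that point; the two encoders are made up for the example and are not the paper's autoencoder.

```python
import numpy as np

def encoder_a(c, v):
    """Encode (shared factor c, varying factor v) in one fixed reference frame."""
    return np.array([c, v])

def encoder_b(c, v):
    """Encode the same data, but flip the sign of v whenever c is negative,
    i.e. the reference frame for v depends on the other factor."""
    return np.array([c, v if c >= 0 else -v])

# Weak pair labels ("only v changed") are satisfied by both encoders, because the
# first coordinate is identical whenever c is shared.  Yet transferring the second
# coordinate between samples with different c gives different results.
print(encoder_a(1.0, 0.5), encoder_b(1.0, 0.5))    # identical codes
print(encoder_a(-1.0, 0.5), encoder_b(-1.0, 0.5))  # second factor disagrees
```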
Learning to Mix n-Step Returns: Generalizing Lambda-Returns for Deep Reinforcement Learning | Reinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state's value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain. | The authors present confidence-based autodidactic returns, a Deep learning RL method to adjust the weights of an eligibility vector in TD(lambda)-like value estimation to favour more stable estimates of the state. The key to being able to learn these confidence values is to not allow the error of the confidence estimates to propagate back through the deep learning architecture. However, the method by which these confidence estimates are refined could be better described. The authors describe these confidences variously as: "some notion of confidence that the agent has in the value function estimate" and "weighing the returns based on a notion of confidence has been explored earlier (White & White, 2016; Thomas et al., 2015)". But the exact method is difficult to piece together from what is written. I believe that the confidence estimates are considered to be part of the critic and the w vector to be part of the theta_c parameters. This would then be captured by the critic gradient for the CAR method that appears towards the end of page 5. If so, this should be stated explicitly. There is another theoretical point that could be clearer. The variation in an autodidactic update of a value function (Equation (4)) depends on a few things, the variation in future value function estimates themselves being just one factor. Another two sources of variation are: the uncertainty over how likely each path is to be taken, and the uncertainty in immediate rewards accumulated as part of some n-step return. In my opinion, the quality of the paper would be much improved by a brief discussion of this, and some reflection on what aspects of these variations contribute to the confidence vectors and what isn't captured. Nonetheless, I believe that the paper represents an interesting and worthy submission to the conference. I would strongly urge the authors to improve the method description in the camera-ready version though. 
A few additional comments are as follows: • The plot in Figure 3 is the leading collection of results to demonstrate the dominance of the authors' adaptive weight approach (CAR) over the A3C (TD(0) estimates) and LRA3C (truncated TD(lambda) estimates) approaches. However, the way the results are presented/plotted, namely the linear plot of the (shifted) relative performance of CAR (and LRA3C) versus A3C, visually inflates the importance of tasks on which CAR (and LRA3C) perform better than A3C, and diminishes the importance of those tasks on which A3C performs better. It would be better kept as a relative value and plotted on a log-scale so that positive and negative improvements can be viewed on an equal setting. • On page 3, when Gt is first mentioned, Gt should really be described first, before the reader is told what it is often replaced with. • On page 3, where delta_t is defined (the j step return TD error, I think the middle term should be • On page 4 and 5, when describing the gradient for the actor and critic, it would be better if these were given their own terminology, but if not, then use of the word respectively in each case would help. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning to Mix n-Step Returns: Generalizing Lambda-Returns for Deep Reinforcement Learning\nReinforcement Learning (RL) can model complex behavior policies for goal-directed sequential decision making tasks. A hallmark of RL algorithms is Temporal Difference (TD) learning: value function for the current state is moved towards a bootstrapped target that is estimated using the next state*s value function. lambda-returns define the target of the RL agent as a weighted combination of rewards estimated by using multiple many-step look-aheads. Although mathematically tractable, the use of exponentially decaying weighting of n-step returns based targets in lambda-returns is a rather ad-hoc design choice. Our major contribution is that we propose a generalization of lambda-returns called Confidence-based Autodidactic Returns (CAR), wherein the RL agent learns the weighting of the n-step returns in an end-to-end manner. In contrast to lambda-returns wherein the RL agent is restricted to use an exponentially decaying weighting scheme, CAR allows the agent to learn to decide how much it wants to weigh the n-step returns based targets. Our experiments, in addition to showing the efficacy of CAR, also empirically demonstrate that using sophisticated weighted mixtures of multi-step returns (like CAR and lambda-returns) considerably outperforms the use of n-step returns. We perform our experiments on the Asynchronous Advantage Actor Critic (A3C) algorithm in the Atari 2600 domain.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors present confidence-based autodidactic returns, a Deep learning RL method to adjust the weights of an eligibility vector in TD(lambda)-like value estimation to favour more stable estimates of the state. The key to being able to learn these confidence values is to not allow the error of the confidence estimates propagate back though the deep learning architecture. However, the method by which these confidence estimates are refined could be better described. The authors describe these confidences variously as: \"some notion of confidence that the agent has in the value function estimate\" and \"weighing the returns based on a notion of confidence has been explored earlier (White & White, 2016; Thomas et al., 2015)\". But the exact method is difficult to piece together from what is written. I believe that the confidence estimates are considered to be part of the critic and the w vector to be part of the theta_c parameters. This would then be captured by the critic gradient for the CAR method that appears towards the end of page 5. If so, this should be stated explicitly. There is another theoretical point that could be clearer. The variation in an autodidactic update of a value function (Equation (4)) depends on a few things, the in variation future value function estimates themselves being just one factor. Another two sources of variation are: the uncertainty over how likely each path is to be taken, and the uncertainty in immediate rewards accumulated as part of some n-step return. In my opinion, the quality of the paper would be much improved by a brief discussion of this, and some reflection on what aspects of these variation contribute to the confidence vectors and what isn*t captured. Nonetheless, I believe that the paper represents an interesting and worthy submission to the conference. I would strongly urge the authors to improve the method description in the camera read version though. 
A few additional comments are as follows: • The plot in Figure 3 is the leading collection of results to demonstrate the dominance of the authors* adaptive weight approach (CAR) over the A3C (TD(0) estimates) and LRA3C (truncated TD(lambda) estimates) approaches. However, the way the results are presented/plotted, namely the linear plot of the (shifted) relative performance of CAR (and LRA3C) versus A3C, visually inflates the importance of tasks on which CAR (and LRA3C) perform better than A3C, and diminishes the importance of those tasks on which A3C performs better. It would be better kept as a relative value and plotted on a log-scale so that positive and negative improvements can be viewed on an equal setting. • On page 3, when Gt is first mentioned, Gt should really be described first, before the reader is told what it is often replaced with. • On page 3, where delta_t is defined (the j step return TD error, I think the middle term should be • On page 4 and 5, when describing the gradient for the actor and critic, it would be better if these were given their own terminology, but if not, then use of the word respectively in each case would help.",
"role": "assistant"
}
]
| [
" The authors present confidence-based autodidactic returns, a Deep learning RL method to adjust the weights of an eligibility vector in TD(lambda)-like value estimation to favour more stable estimates of the state.",
"The key to being able to learn these confidence values is to not allow the error of the confidence estimates propagate back though the deep learning architecture.",
"However, the method by which these confidence estimates are refined could be better described.",
"The authors describe these confidences variously as: \"some notion of confidence that the agent has in the value function estimate\" and \"weighing the returns based on a notion of confidence has been explored earlier (White & White, 2016; Thomas et al., 2015)\".",
"But the exact method is difficult to piece together from what is written.",
"I believe that the confidence estimates are considered to be part of the critic and the w vector to be part of the theta_c parameters.",
"This would then be captured by the critic gradient for the CAR method that appears towards the end of page 5.",
"If so, this should be stated explicitly.",
"There is another theoretical point that could be clearer.",
"The variation in an autodidactic update of a value function (Equation (4)) depends on a few things, the in variation future value function estimates themselves being just one factor.",
"Another two sources of variation are: the uncertainty over how likely each path is to be taken, and the uncertainty in immediate rewards accumulated as part of some n-step return.",
"In my opinion, the quality of the paper would be much improved by a brief discussion of this, and some reflection on what aspects of these variation contribute to the confidence vectors and what isn*t captured.",
"Nonetheless, I believe that the paper represents an interesting and worthy submission to the conference.",
"I would strongly urge the authors to improve the method description in the camera read version though.",
"A few additional comments are as follows: • The plot in Figure 3 is the leading collection of results to demonstrate the dominance of the authors* adaptive weight approach (CAR) over the A3C (TD(0) estimates) and LRA3C (truncated TD(lambda) estimates) approaches.",
"However, the way the results are presented/plotted, namely the linear plot of the (shifted) relative performance of CAR (and LRA3C) versus A3C, visually inflates the importance of tasks on which CAR (and LRA3C) perform better than A3C, and diminishes the importance of those tasks on which A3C performs better.",
"It would be better kept as a relative value and plotted on a log-scale so that positive and negative improvements can be viewed on an equal setting.",
"• On page 3, when Gt is first mentioned, Gt should really be described first, before the reader is told what it is often replaced with.",
"• On page 3, where delta_t is defined (the j step return TD error, I think the middle term should be • On page 4 and 5, when describing the gradient for the actor and critic, it would be better if these were given their own terminology, but if not, then use of the word respectively in each case would help."
] | {
"criticism": 2,
"example": 4,
"importance_and_relevance": 2,
"materials_and_methods": 14,
"praise": 2,
"presentation_and_reporting": 9,
"results_and_discussion": 12,
"suggestion_and_solution": 8,
"total": 19
} | 0.105263 | 0.210526 | 0.105263 | 0.736842 | 0.105263 | 0.473684 | 0.631579 | 0.421053 | 19 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.789474 | 2.221306 | 0.568167 |
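The record above contrasts the fixed exponentially decaying weighting of lambda-returns with learned, confidence-based weights over n-step returns. The numpy sketch below computes both kinds of mixture for a tiny trajectory; the softmax normalisation of the confidence scores is an illustrative assumption, not the exact CAR parameterisation, and the reward/value numbers are made up.

```python
import numpy as np

def n_step_returns(rewards, values, gamma=0.99):
    """G^(n) for n = 1..N, where values[n] bootstraps the state reached after n steps."""
    N = len(rewards)
    return np.array([
        sum(gamma ** k * rewards[k] for k in range(n)) + gamma ** n * values[n]
        for n in range(1, N + 1)
    ])

def lambda_target(returns, lam=0.9):
    """Truncated lambda-return: exponentially decaying mixture of n-step returns."""
    n = len(returns)
    w = np.array([(1 - lam) * lam ** i for i in range(n)])
    w[-1] = lam ** (n - 1)          # remaining probability mass goes to the longest return
    return float(w @ returns)

def confidence_target(returns, confidences):
    """CAR-style target with weights derived from (hypothetical) confidence scores."""
    w = np.exp(confidences - confidences.max())
    w /= w.sum()                    # softmax normalisation, an illustrative choice
    return float(w @ returns)

rewards = [1.0, 0.0, 1.0, 1.0]
values = [0.0, 0.5, 0.4, 0.6, 0.7]  # values[0] is unused by the returns above
rs = n_step_returns(rewards, values)
print(lambda_target(rs), confidence_target(rs, np.array([0.2, 1.0, 0.1, 0.3])))
```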
On Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size | Mini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of noisy gradient. However, its batch size is a fixed hyper-parameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend the BA-SGD to momentum algorithm and evaluate both the BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, batch size boom. The code implementing batch adaptive framework is now open source, applicable to any gradient-based optimization problems. | The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al. to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al. paper validating their approach. The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we're far from convergence, and accurate estimates are more valuable in later iterations, when we're close. Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose. That said, I'm not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don't really claim that it's anything else), but I don't feel that the proposed algorithm really rests on a solid theoretical foundation. The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update \mathcal{P} is the same as the variance of the most recent minibatch? Shouldn't it depend on the previous terms which are included in \mathcal{P}? I'm also not convinced by the dependence on the "optimal" objective function value S^* in equation 6. In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still. Also, the "Robbins 2007" reference should, I believe, be "Robbins and Monro, 1951". These are all relatively minor issues, however. My main criticism is that the experiments only report results in terms of *training* loss. 
The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually. To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data. Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods. They claim that, by using small batches early and large batches late, they may avoid this issue, and I don't necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem. I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can't make any determination about how well the proposed method generalizes, compared to fixed-size minibatches. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size\nMini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of noisy gradient. However, its batch size is a fixed hyper-parameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend the BA-SGD to momentum algorithm and evaluate both the BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, batch size boom. The code implementing batch adaptive framework is now open source, applicable to any gradient-based optimization problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al. to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al. paper validating their approach. The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we*re far from convergence, and accurate estimates are more valuable in later iterations, when we*re close. Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose. That said, I*m not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don*t really claim that it*s anything else), but I don*t feel that the proposed algorithm really rests on a solid theoretical foundation. The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update .mathcal{P} is the same as the variance of the most recent minibatch? Shouldn*t it depend on the previous terms which are included in .mathcal{P}? I*m also not convinced by the dependence on the \"optimal\" objective function value S^* in equation 6. In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still. Also, the \"Robbins 2007\" reference should, I believe, be \"Robbins and Monro, 1951\". These are all relatively minor issues, however. My main criticism is that the experiments only report results in terms of *training* loss. 
The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually. To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data. Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods. They claim that, by using small batches early and large batches late, they may avoid this issue, and I don*t necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem. I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can*t make any determination about how well the proposed method generalizes, compared to fixed-size minibatches.",
"role": "assistant"
}
]
| [
" The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al.",
"to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al.",
"paper validating their approach.",
"The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we*re far from convergence, and accurate estimates are more valuable in later iterations, when we*re close.",
"Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose.",
"That said, I*m not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don*t really claim that it*s anything else), but I don*t feel that the proposed algorithm really rests on a solid theoretical foundation.",
"The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update .mathcal{P} is the same as the variance of the most recent minibatch?",
"Shouldn*t it depend on the previous terms which are included in .mathcal{P}?",
"I*m also not convinced by the dependence on the \"optimal\" objective function value S^* in equation 6.",
"In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still.",
"Also, the \"Robbins 2007\" reference should, I believe, be \"Robbins and Monro, 1951\".",
"These are all relatively minor issues, however.",
"My main criticism is that the experiments only report results in terms of *training* loss.",
"The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually.",
"To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data.",
"Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods.",
"They claim that, by using small batches early and large batches late, they may avoid this issue, and I don*t necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem.",
"I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can*t make any determination about how well the proposed method generalizes, compared to fixed-size minibatches."
] | {
"criticism": 6,
"example": 3,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 11,
"suggestion_and_solution": 5,
"total": 18
} | 0.333333 | 0.166667 | 0.055556 | 0.666667 | 0.111111 | 0.111111 | 0.611111 | 0.277778 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.938773 | 0.394561 |
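The record above centres on growing the batch as training proceeds, so that noisy gradients are tolerated early and more accurate ones are bought later. The sketch below uses a simple stand-in rule (enlarge the batch until the gradient's standard error is small relative to its norm) together with a momentum step; it is not the utility-based criterion of BA-SGD/BA-Momentum, and the linear-regression data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_example_grads(w, X, Y):
    """Per-example gradients of squared error for a linear model (illustrative)."""
    return 2.0 * (X @ w - Y)[:, None] * X

def ba_momentum_step(w, velocity, X, Y, lr=0.05, momentum=0.9,
                     start_batch=8, max_batch=512, noise_tol=0.5):
    """Grow the mini-batch until the gradient estimate looks reliable, then take
    one momentum step.  The growth rule is a stand-in, not the paper's criterion."""
    b = start_batch
    while True:
        idx = rng.choice(len(X), size=b, replace=False)
        g = per_example_grads(w, X[idx], Y[idx])
        mean_g = g.mean(axis=0)
        stderr = np.linalg.norm(g.std(axis=0)) / np.sqrt(b)
        if stderr <= noise_tol * np.linalg.norm(mean_g) or b >= max_batch:
            break
        b *= 2                      # estimate still too noisy: enlarge the batch
    velocity = momentum * velocity - lr * mean_g
    return w + velocity, velocity, b

X = rng.normal(size=(2048, 5))
Y = X @ np.arange(1.0, 6.0) + 0.1 * rng.normal(size=2048)
w, v = np.zeros(5), np.zeros(5)
for step in range(5):
    w, v, b = ba_momentum_step(w, v, X, Y)
    print(f"step {step}: chose batch size {b}")
```

As the gradient norm shrinks near convergence, this rule pushes the batch toward its cap, which loosely mirrors the "batch size boom" behaviour the abstract mentions.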
Learning Sparse Neural Networks through L_0 Regularization | We propose a practical method for L_0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L_0 regularization. However, since the L_0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected regularized objective is differentiable with respect to the distribution parameters. We further propose the \emph{hard concrete} distribution for the gates, which is obtained by ``stretching'' a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer. | This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical risk minimization. The proposed continuous relaxation scheme allows for gradient-based stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter to take values of exact zeros and ones, with additional stretching and thresholding operations. Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components. The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods. Pros: - The paper is clearly written, self-contained and a pleasure to read. - Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization. Cons: - It would be interesting to see how the induced penalty behaves in terms of shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties the proposed hard-concrete distribution has, e.g., closed-form density, convexity, etc. - If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Sparse Neural Networks through L_0 Regularization\nWe propose a practical method for norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of regularization. However, since the norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected regularized objective is differentiable with respect to the distribution parameters. We further propose the emph{hard concrete} distribution for the gates, which is obtained by ``stretching** a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization. The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations. Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components. The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods. Pros: - The paper is clearly written, self-contained and a pleasure to read. - Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization Cons: - It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc. - If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant",
"role": "assistant"
}
]
| [
" This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization.",
"The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations.",
"Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components.",
"The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods.",
"Pros: - The paper is clearly written, self-contained and a pleasure to read.",
"- Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization Cons: - It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc.",
"- If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.857143 | 0.285714 | 0.142857 | 0.571429 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.714286 | 2.146118 | 0.568167 |
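The hard concrete gate described in the L_0-regularization record above (a binary concrete sample, stretched to an interval (gamma, zeta) and clipped with a hard-sigmoid) is straightforward to simulate. The sketch below is a minimal NumPy illustration rather than the authors' reference implementation; the temperature beta = 2/3 and the stretch limits gamma = -0.1, zeta = 1.1 are assumed, illustrative values.

```python
import numpy as np

def sample_hard_concrete(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1, rng=None):
    """Draw one stochastic gate per entry of log_alpha.

    Steps: (1) sample a binary-concrete relaxation s in (0, 1),
    (2) stretch it to the interval (gamma, zeta), (3) clip with a
    hard-sigmoid so exact zeros and ones occur with non-zero probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=np.shape(log_alpha))
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / beta))
    s_stretched = s * (zeta - gamma) + gamma
    return np.clip(s_stretched, 0.0, 1.0)  # gates multiply the weights elementwise

# Example: gates for a 5-weight layer; more negative log_alpha pushes a gate toward 0.
log_alpha = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(sample_hard_concrete(log_alpha))
```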
On Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size | Mini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of noisy gradient. However, its batch size is a fixed hyper-parameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend the BA-SGD to momentum algorithm and evaluate both the BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, batch size boom. The code implementing batch adaptive framework is now open source, applicable to any gradient-based optimization problems. | The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al. to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al. paper validating their approach. The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we*re far from convergence, and accurate estimates are more valuable in later iterations, when we*re close. Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose. That said, I*m not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don*t really claim that it*s anything else), but I don*t feel that the proposed algorithm really rests on a solid theoretical foundation. The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update .mathcal{P} is the same as the variance of the most recent minibatch? Shouldn*t it depend on the previous terms which are included in .mathcal{P}? I*m also not convinced by the dependence on the "optimal" objective function value S^* in equation 6. In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still. Also, the "Robbins 2007" reference should, I believe, be "Robbins and Monro, 1951". These are all relatively minor issues, however. My main criticism is that the experiments only report results in terms of *training* loss. 
The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually. To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data. Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods. They claim that, by using small batches early and large batches late, they may avoid this issue, and I don*t necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem. I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can*t make any determination about how well the proposed method generalizes, compared to fixed-size minibatches. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nOn Batch Adaptive Training for Deep Learning: Lower Loss and Larger Step Size\nMini-batch gradient descent and its variants are commonly used in deep learning. The principle of mini-batch gradient descent is to use noisy gradient calculated on a batch to estimate the real gradient, thus balancing the computation cost per iteration and the uncertainty of noisy gradient. However, its batch size is a fixed hyper-parameter requiring manual setting before training the neural network. Yin et al. (2017) proposed a batch adaptive stochastic gradient descent (BA-SGD) that can dynamically choose a proper batch size as learning proceeds. We extend the BA-SGD to momentum algorithm and evaluate both the BA-SGD and the batch adaptive momentum (BA-Momentum) on two deep learning tasks from natural language processing to image classification. Experiments confirm that batch adaptive methods can achieve a lower loss compared with mini-batch methods after scanning the same epochs of data. Furthermore, our BA-Momentum is more robust against larger step sizes, in that it can dynamically enlarge the batch size to reduce the larger uncertainty brought by larger step sizes. We also identified an interesting phenomenon, batch size boom. The code implementing batch adaptive framework is now open source, applicable to any gradient-based optimization problems.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al. to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al. paper validating their approach. The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we*re far from convergence, and accurate estimates are more valuable in later iterations, when we*re close. Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose. That said, I*m not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don*t really claim that it*s anything else), but I don*t feel that the proposed algorithm really rests on a solid theoretical foundation. The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update .mathcal{P} is the same as the variance of the most recent minibatch? Shouldn*t it depend on the previous terms which are included in .mathcal{P}? I*m also not convinced by the dependence on the \"optimal\" objective function value S^* in equation 6. In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still. Also, the \"Robbins 2007\" reference should, I believe, be \"Robbins and Monro, 1951\". These are all relatively minor issues, however. My main criticism is that the experiments only report results in terms of *training* loss. 
The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually. To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data. Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods. They claim that, by using small batches early and large batches late, they may avoid this issue, and I don*t necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem. I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can*t make any determination about how well the proposed method generalizes, compared to fixed-size minibatches.",
"role": "assistant"
}
]
| [
" The authors propose extending the recently-proposed adaptive batch-size approach of Yin et al.",
"to an update that includes momentum, and perform more comprehensive experiments than in the Yin et al.",
"paper validating their approach.",
"The basic idea makes a great deal of intuitive sense: inaccurate gradient estimates are fine in early iterations, when we*re far from convergence, and accurate estimates are more valuable in later iterations, when we*re close.",
"Finding the optimal trade-off between computational cost and expected decrease seems like the most natural way to accomplish this, and this is precisely what they propose.",
"That said, I*m not totally convinced by the derivation of sections 2 and 3: the Gaussian assumption is fine as a heuristic (and they don*t really claim that it*s anything else), but I don*t feel that the proposed algorithm really rests on a solid theoretical foundation.",
"The extension to the momentum case (section 3) seems to be more-or-less straightforward, but I do have a question about equation 15: am I misreading this, or is it saying that the variance of the momentum update .mathcal{P} is the same as the variance of the most recent minibatch?",
"Shouldn*t it depend on the previous terms which are included in .mathcal{P}?",
"I*m also not convinced by the dependence on the \"optimal\" objective function value S^* in equation 6.",
"In their algorithm, they take S^* to be zero, which is a good conservative choice for a nonnegative loss, but the fact that this quantity is present in the first place, as a user-specified parameter, makes me nervous, since even for a nonnegative loss, the optimum might be quite far from zero, and on a non-convex problem, the eventual local optimum at which we eventually settle down may be further still.",
"Also, the \"Robbins 2007\" reference should, I believe, be \"Robbins and Monro, 1951\".",
"These are all relatively minor issues, however.",
"My main criticism is that the experiments only report results in terms of *training* loss.",
"The use of adaptive batch sizes does indeed appear to result in faster convergence in terms of training loss, but the plots are in log scale (which I do think is the right way to present it), so the difference is smaller in reality than it appears visually.",
"To determine whether this improvement in training performance is a *real* improvement, I think we need to see the performance (in terms of accuracy, not loss) on held-out data.",
"Finally, as the authors mention in the final paragraph of their conclusion, some recent work has indicated that large-batch methods may generalize worse than small-batch methods.",
"They claim that, by using small batches early and large batches late, they may avoid this issue, and I don*t necessarily disagree, but I think an argument could be made in the opposite direction: that since the proposed approach becomes a large-batch method in the later iterations, it may suffer from this problem.",
"I think that this is worth exploring further, and, again, without results on testing data being presented, a reader can*t make any determination about how well the proposed method generalizes, compared to fixed-size minibatches."
] | {
"criticism": 6,
"example": 3,
"importance_and_relevance": 1,
"materials_and_methods": 12,
"praise": 2,
"presentation_and_reporting": 2,
"results_and_discussion": 11,
"suggestion_and_solution": 5,
"total": 18
} | 0.333333 | 0.166667 | 0.055556 | 0.666667 | 0.111111 | 0.111111 | 0.611111 | 0.277778 | 18 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.333333 | 1.938773 | 0.394561 |
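The batch-adaptive idea in the record above, tolerate noisy gradients early and enlarge the batch once noise dominates, can be illustrated with a much simpler rule than the Gaussian-model criterion of BA-SGD. The sketch below is a hedged stand-in, not Yin et al.'s decision rule: it doubles the minibatch until the averaged gradient's norm exceeds a chosen multiple of its estimated standard error. The threshold, starting size, and cap are illustrative assumptions.

```python
import numpy as np

def adaptive_batch_gradient(grad_fn, data, start=32, max_batch=4096, snr_target=2.0, rng=None):
    """Grow the minibatch until the averaged gradient is signal-dominated.

    grad_fn(batch) must return per-example gradients of shape (b, d);
    data is an array of training examples. Returns (mean_gradient, batch_size_used).
    """
    rng = np.random.default_rng() if rng is None else rng
    cap = min(max_batch, len(data))
    batch = min(start, cap)
    while True:
        idx = rng.choice(len(data), size=batch, replace=False)
        per_example = grad_fn(data[idx])                      # (batch, d)
        g = per_example.mean(axis=0)
        # crude standard error of the mean gradient (diagonal approximation)
        se = np.sqrt(per_example.var(axis=0).sum() / batch)
        if np.linalg.norm(g) >= snr_target * se or batch >= cap:
            return g, batch
        batch = min(2 * batch, cap)

# Toy usage: per-example gradients of 0.5 * (x @ w - y)^2 at w = 0 on synthetic data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(10_000, 5)), rng.normal(size=10_000)
data = np.concatenate([X, y[:, None]], axis=1)
grad_fn = lambda batch: -(batch[:, :5] * batch[:, 5:])        # shape (b, 5)
print(adaptive_batch_gradient(grad_fn, data))
```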
Learning Sparse Neural Networks through L_0 Regularization | We propose a practical method for norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of regularization. However, since the norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected regularized objective is differentiable with respect to the distribution parameters. We further propose the emph{hard concrete} distribution for the gates, which is obtained by ``stretching** a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer. | This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization. The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations. Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components. The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods. Pros: - The paper is clearly written, self-contained and a pleasure to read. - Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization Cons: - It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc. - If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Sparse Neural Networks through L_0 Regularization\nWe propose a practical method for norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of regularization. However, since the norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected regularized objective is differentiable with respect to the distribution parameters. We further propose the emph{hard concrete} distribution for the gates, which is obtained by ``stretching** a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization. The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations. Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components. The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods. Pros: - The paper is clearly written, self-contained and a pleasure to read. - Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization Cons: - It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc. - If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant",
"role": "assistant"
}
]
| [
" This paper presents a continuous surrogate for the ell_0 norm and focuses on its applications in regularized empirical regularized minimization.",
"The proposed continuous relaxation scheme allows for gradient based-stochastic optimization for binary discrete variables under the reparameterization trick, and extends the original binary concrete distribution by allowing the parameter taking values of exact zeros and ones, with additional stretching and thresholding operations.",
"Under a compound construction of sparsity, the proposed approach can easily incorporate group sparsity by sharing supports among the grouped variables, or be combined with other types of regularizations on the magnitude of non-zero components.",
"The efficacy of the proposed method in sparsification and speedup is demonstrated in two experiments with comparisons against a few baseline methods.",
"Pros: - The paper is clearly written, self-contained and a pleasure to read.",
"- Based on the evidence provided, the procedure seems to be a useful continuous relaxation scheme to consider in handling optimization with spike and slab regularization Cons: - It would be interesting to see how the induced penalty behaves in terms shrinkage comparing against ell_0 and other ell_p choices - It is unclear what properties does the proposed hard-concrete distribution have, e.g., closed-form density, convexity, etc.",
"- If the authors can offer a rigorous analysis on the influence of base concrete distribution and provide more guidance on how to choose the stretching parameters in practice, this paper would be more significant"
] | {
"criticism": 1,
"example": 0,
"importance_and_relevance": 2,
"materials_and_methods": 6,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 4,
"suggestion_and_solution": 3,
"total": 7
} | 0.142857 | 0 | 0.285714 | 0.857143 | 0.285714 | 0.142857 | 0.571429 | 0.428571 | 7 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.714286 | 2.146118 | 0.568167 |
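Alongside the shrinkage question raised in the review above, the expected L_0 penalty itself is easy to inspect: under the stretched binary-concrete gate, the probability that a gate is non-zero has the closed form P(z != 0) = sigmoid(log_alpha - beta * log(-gamma / zeta)). That expression is stated here as an assumption consistent with the construction in the abstract, and the snippet below checks it against Monte Carlo samples; parameter values are the same illustrative assumptions as in the earlier sketch.

```python
import numpy as np

def prob_gate_active(log_alpha, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
    """Closed-form P(z != 0) for a stretched binary-concrete gate (assumed expression)."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - beta * np.log(-gamma / zeta))))

def prob_gate_active_mc(log_alpha, n=200_000, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1, seed=0):
    """Monte Carlo estimate of the same probability from raw gate samples."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=n)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1.0 - u) + log_alpha) / beta))
    z = np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)
    return np.mean(z > 0.0)

for la in (-2.0, 0.0, 2.0):
    print(la, prob_gate_active(la), prob_gate_active_mc(la))
```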
Learning Differentially Private Recurrent Language Models | We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset. | This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models. It experimentally shows that the proposed differentially private LSTM achieves comparable utility compared to the non-private model. The idea of training differentially private neural network is interesting and very important to the machine learning + differential privacy community. This work makes a pretty significant contribution to such topic. It adapts techniques from some previous work to address the difficulties in training language model and providing user-level privacy. The experiment shows good privacy and utility. The presentation of the paper can be improved a bit. For example, it might be better to have a preliminary section before Section2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper. Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. 
This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLearning Differentially Private Recurrent Language Models\nWe demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models. It experimentally shows that the proposed differentially private LSTM achieves comparable utility compared to the non-private model. The idea of training differentially private neural network is interesting and very important to the machine learning + differential privacy community. This work makes a pretty significant contribution to such topic. It adapts techniques from some previous work to address the difficulties in training language model and providing user-level privacy. The experiment shows good privacy and utility. The presentation of the paper can be improved a bit. For example, it might be better to have a preliminary section before Section2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper. Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work.",
"role": "assistant"
}
]
| [
" This paper extends the previous results on differentially private SGD to user-level differentially private recurrent language models.",
"It experimentally shows that the proposed differentially private LSTM achieves comparable utility compared to the non-private model.",
"The idea of training differentially private neural network is interesting and very important to the machine learning + differential privacy community.",
"This work makes a pretty significant contribution to such topic.",
"It adapts techniques from some previous work to address the difficulties in training language model and providing user-level privacy.",
"The experiment shows good privacy and utility.",
"The presentation of the paper can be improved a bit.",
"For example, it might be better to have a preliminary section before Section2 introducing the original differentially private SGD algorithm with clipping, the original FedAvg and FedSGD, and moments accountant as well as privacy amplification; otherwise, it can be pretty difficult for readers who are not familiar with those concepts to fully understand the paper.",
"Such introduction can also help readers understand the difficulty of adapting the original algorithms and appreciate the contributions of this work."
] | {
"criticism": 1,
"example": 1,
"importance_and_relevance": 4,
"materials_and_methods": 6,
"praise": 4,
"presentation_and_reporting": 4,
"results_and_discussion": 1,
"suggestion_and_solution": 3,
"total": 9
} | 0.111111 | 0.111111 | 0.444444 | 0.666667 | 0.444444 | 0.444444 | 0.111111 | 0.333333 | 9 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.666667 | 2.414148 | 0.252519 |
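The two mechanical ingredients named in the record above, per-user clipping of model updates and calibrated Gaussian noise on the aggregated update, are easy to sketch; the substantive part of the paper, the moments-accountant privacy bookkeeping and amplification by sampling, is omitted here. The clip norm and noise multiplier below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dp_federated_average(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-user model deltas with user-level clipping plus Gaussian noise.

    user_updates: array of shape (num_users, num_params), one delta per user.
    Returns the noised average update to apply to the global model.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(user_updates, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = user_updates * scale                     # each user contributes at most clip_norm
    mean_update = clipped.mean(axis=0)
    sigma = noise_multiplier * clip_norm / len(user_updates)   # sensitivity of the mean is clip_norm / n
    return mean_update + rng.normal(0.0, sigma, size=mean_update.shape)

# Example: 100 users, 10-parameter model.
updates = np.random.default_rng(0).normal(size=(100, 10))
print(dp_federated_average(updates)[:3])
```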
No Spurious Local Minima in a Two Hidden Unit ReLU Network | Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple neural network two-layer ReLU network with two hidden units, and show that all local minimizers are global. This combined with recent work of Lee et al. (2017); Lee et al. (2016) show that gradient descent converges to the global minimizer. | In this paper the authors studied the theoretical properties of manifold descent approaches in a standard regression problem, whose regressor is a simple neural network. Leveraged by two recent results in global optimization, they showed that with a simple two-layer ReLU network with two hidden units, the problem with a standard MSE population loss function does not have spurious local minimum points. Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods. In general I found this paper clearly written and technically sound. I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures. Contribution: As discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units. The analysis becomes considerably more complicated, and the contribution seems to be novel and significant. I am not sure why did the authors mentioned the work on over-parameterization though. It doesn*t seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small). Comments on the Assumptions: - Please explain the motivation behind the standard Gaussian assumption of the input vector x. - Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^.top w_2=0 (or the acute angle assumption in Section 6). Without extra justifications, it seems that the theoretical result only holds for an artificial problem setting. While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results. General Comment: The technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs. From the analysis in the main paper, I believe the theoretical contribution is correct and sound. While I appreciate the technical contributions, in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples). Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector, and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems. 
| iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nNo Spurious Local Minima in a Two Hidden Unit ReLU Network\nDeep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this. A key question in optimization is to understand when the optimization landscape of a neural network is amenable to gradient-based optimization. We focus on a simple neural network two-layer ReLU network with two hidden units, and show that all local minimizers are global. This combined with recent work of Lee et al. (2017); Lee et al. (2016) show that gradient descent converges to the global minimizer.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper the authors studied the theoretical properties of manifold descent approaches in a standard regression problem, whose regressor is a simple neural network. Leveraged by two recent results in global optimization, they showed that with a simple two-layer ReLU network with two hidden units, the problem with a standard MSE population loss function does not have spurious local minimum points. Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods. In general I found this paper clearly written and technically sound. I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures. Contribution: As discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units. The analysis becomes considerably more complicated, and the contribution seems to be novel and significant. I am not sure why did the authors mentioned the work on over-parameterization though. It doesn*t seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small). Comments on the Assumptions: - Please explain the motivation behind the standard Gaussian assumption of the input vector x. - Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^.top w_2=0 (or the acute angle assumption in Section 6). Without extra justifications, it seems that the theoretical result only holds for an artificial problem setting. While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results. 
General Comment: The technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs. From the analysis in the main paper, I believe the theoretical contribution is correct and sound. While I appreciate the technical contributions, in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples). Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector, and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems.",
"role": "assistant"
}
]
| [
" In this paper the authors studied the theoretical properties of manifold descent approaches in a standard regression problem, whose regressor is a simple neural network.",
"Leveraged by two recent results in global optimization, they showed that with a simple two-layer ReLU network with two hidden units, the problem with a standard MSE population loss function does not have spurious local minimum points.",
"Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods.",
"In general I found this paper clearly written and technically sound.",
"I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures.",
"Contribution: As discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units.",
"The analysis becomes considerably more complicated, and the contribution seems to be novel and significant.",
"I am not sure why did the authors mentioned the work on over-parameterization though.",
"It doesn*t seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small).",
"Comments on the Assumptions: - Please explain the motivation behind the standard Gaussian assumption of the input vector x.",
"- Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^.top w_2=0 (or the acute angle assumption in Section 6).",
"Without extra justifications, it seems that the theoretical result only holds for an artificial problem setting.",
"While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results.",
"General Comment: The technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs.",
"From the analysis in the main paper, I believe the theoretical contribution is correct and sound.",
"While I appreciate the technical contributions, in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples).",
"Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector, and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems."
] | {
"criticism": 5,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 11,
"praise": 5,
"presentation_and_reporting": 3,
"results_and_discussion": 10,
"suggestion_and_solution": 4,
"total": 17
} | 0.294118 | 0 | 0.235294 | 0.647059 | 0.294118 | 0.176471 | 0.588235 | 0.235294 | 17 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.470588 | 2.218069 | 0.252519 |
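The population MSE analysed in the record above rests on closed-form Gaussian expectations of products of ReLU units. A standard identity used in such landscape analyses, taken here as an assumption and verified numerically below, is E_x[ReLU(w^T x) ReLU(v^T x)] = (||w|| ||v|| / (2*pi)) * (sin(theta) + (pi - theta) * cos(theta)) for x ~ N(0, I), where theta is the angle between w and v.

```python
import numpy as np

def relu_corr_closed_form(w, v):
    """Assumed closed form for E[ReLU(w.x) ReLU(v.x)] under standard Gaussian x."""
    cos_t = np.clip(w @ v / (np.linalg.norm(w) * np.linalg.norm(v)), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return np.linalg.norm(w) * np.linalg.norm(v) / (2.0 * np.pi) * (
        np.sin(theta) + (np.pi - theta) * np.cos(theta)
    )

def relu_corr_monte_carlo(w, v, n=500_000, seed=0):
    """Monte Carlo estimate of the same expectation."""
    x = np.random.default_rng(seed).normal(size=(n, len(w)))
    return np.mean(np.maximum(x @ w, 0.0) * np.maximum(x @ v, 0.0))

w = np.array([1.0, 0.0, 0.0])
v = np.array([0.6, 0.8, 0.0])   # unit vectors at an acute angle, matching the paper's setting
print(relu_corr_closed_form(w, v), relu_corr_monte_carlo(w, v))
```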
Policy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus | In recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem. | The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. At least one of those papers ought to be cited. It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. Without that baseline it*s not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? 
Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nPolicy Gradient For Multidimensional Action Spaces: Action Sampling and Entropy Bonus\nIn recent years deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. In this paper, we develop a novel policy gradient methodology for the case of large multidimensional discrete action spaces. We propose two approaches for creating parameterized policies: LSTM parameterization and a Modified MDP (MMDP) giving rise to Feed-Forward Network (FFN) parameterization. Both of these approaches provide expressive models to which backpropagation can be applied for training. We then consider entropy bonus, which is typically added to the reward function to enhance exploration. In the case of high-dimensional action spaces, calculating the entropy and the gradient of the entropy requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space. On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus. A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures. In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods. At least one of those papers ought to be cited. It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network. Without that baseline it*s not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway. The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions. A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well. A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot.",
"role": "assistant"
}
]
| [
" The authors present two autoregressive models for sampling action probabilities from a factorized discrete action space.",
"On a multi-agent gridworld task and a multi-agent multi-armed bandit task, the proposed method seems to benefit from their lower-variance entropy estimator for exploration bonus.",
"A few key citations were missing - notably the LSTM model they propose is a clear instance of an autoregressive density estimator, as in PixelCNN, WaveNet and other recently popular deep architectures.",
"In that context, this work can be viewed as applying deep autoregressive density estimators to policy gradient methods.",
"At least one of those papers ought to be cited.",
"It also seems like a simple, obvious baseline is missing from their experiments - simply independently outputting D independent softmaxes from the policy network.",
"Without that baseline it*s not clear that any actual benefit is gained by modeling the joint distribution between actions, especially since the optimal policy for an MDP is provably deterministic anyway.",
"The method could even be made to capture dependencies between different actions by adding a latent probabilistic layer in the middle of the policy network, inducing marginal dependencies between different actions.",
"A direct comparison against one of the related methods in the discussion section would help better contextualize the paper as well.",
"A final point on clarity of presentation - in keeping with the convention in the field, the readability of the tables could be improved by putting the top-performing models in bold, and Table 2 should almost certainly be replaced by a boxplot."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 1,
"materials_and_methods": 9,
"praise": 0,
"presentation_and_reporting": 3,
"results_and_discussion": 3,
"suggestion_and_solution": 4,
"total": 10
} | 0.3 | 0 | 0.1 | 0.9 | 0 | 0.3 | 0.3 | 0.4 | 10 | 1 | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 2.3 | 2.157958 | 0.142042 |
Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration | Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations. | [Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written. In light of these changes, I*ve updated my score.] Summary: The authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a "designer" to manually specify the space of possible goals. This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a "good" distribution of goal states. However, even after multiple reads, much of the remainder of the paper remains unclear. Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow. There are too many metrics and too few conclusions for this paper. The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper. Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results. For instance, in the middle of page 9, it is noted that some of the techniques "yield almost as low performance as" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors* approach. This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques). 
Other comments: - There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the "Meta-Policy MSE is not a simple to interpret" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well. - The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided? - The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI? - The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper. - The authors need to be clearer about their notation in a number of places. For instance, they use .gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup. - It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained). In short, the paper has some interesting ideas, yet lacks a clear takeaway message. Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results. Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing. Pros: - A clear, well-written abstract and introduction - While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a "feature space" for their particular family of problems. Cons: - The figure captions are all very "matter-of-fact" and, while they explain what each figure shows, provide no explanation of the results. The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone). - There is not much significance in the current form of the paper, owing to the lack of clear message. While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results. I.e. it is difficult for me to easily visualize all of the "moving parts" of this work: a figure showing the relationship bet - Too many individual ideas are presented in the paper, hurting clarity. As a result, the paper feels scattered. The authors do not have a clear message that neatly ties the results together. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. 
Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nUnsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration\nIntrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n [Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written. In light of these changes, I*ve updated my score.] Summary: The authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a \"designer\" to manually specify the space of possible goals. This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a \"good\" distribution of goal states. However, even after multiple reads, much of the remainder of the paper remains unclear. Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow. There are too many metrics and too few conclusions for this paper. The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper. Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results. For instance, in the middle of page 9, it is noted that some of the techniques \"yield almost as low performance as\" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors* approach. This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques). 
Other comments: - There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the \"Meta-Policy MSE is not a simple to interpret\" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well. - The authors do not list how many observations they are given before the deep representations are learned. Why is this? Additionally, is it possible that not enough data was provided? - The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter. What would happen if a dimension of 2 were chosen? Would the performance of the deep representation models improve? Would their performance rival that of RGE-FI? - The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text. It would improve the clarity of the paper. - The authors need to be clearer about their notation in a number of places. For instance, they use .gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup. - It is never explicitly mentioned exactly how the deep representation learning methods will be used. It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained). In short, the paper has some interesting ideas, yet lacks a clear takeaway message. Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results. Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing. Pros: - A clear, well-written abstract and introduction - While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a \"feature space\" for their particular family of problems. Cons: - The figure captions are all very \"matter-of-fact\" and, while they explain what each figure shows, provide no explanation of the results. The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone). - There is not much significance in the current form of the paper, owing to the lack of clear message. While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results. I.e. it is difficult for me to easily visualize all of the \"moving parts\" of this work: a figure showing the relationship bet - Too many individual ideas are presented in the paper, hurting clarity. As a result, the paper feels scattered. The authors do not have a clear message that neatly ties the results together.",
"role": "assistant"
}
]
| [
" [Edit: After revisions, the authors have made a good-faith effort to improve the clarity and presentation of their paper: figures have been revised, key descriptions have been added, and (perhaps most critically) a couple of small sections outlining the contributions and significance of this work have been written.",
"In light of these changes, I*ve updated my score.]",
"Summary: The authors aim to overcome one of the central limitations of intrinsically motivated goal exploration algorithms by learning a representation without relying on a \"designer\" to manually specify the space of possible goals.",
"This work is significant as it would allow one to learn a policy in complex environments even in the absence of a such a designer or even a clear notion of what would constitute a \"good\" distribution of goal states.",
"However, even after multiple reads, much of the remainder of the paper remains unclear.",
"Many important details, including the metrics by which the authors evaluate performance of their work, can only be found in the appendix; this makes the paper very difficult to follow.",
"There are too many metrics and too few conclusions for this paper.",
"The authors introduce a handful of metrics for evaluating the performance of their approach; I am unfamiliar with a couple of these metrics and there is not much exposition justifying their significance and inclusion in the paper.",
"Furthermore, there are myriad plots showing the performance of the different algorithms, but very little explanation of the importance of the results.",
"For instance, in the middle of page 9, it is noted that some of the techniques \"yield almost as low performance as\" the randomized baseline, yet no attempt is made to explain why this might be the case or what implications it has for the authors* approach.",
"This problem pervades the paper: many metrics are introduced for how we might want to evaluate these techniques, yet there is no provided reason to prefer one over another (or even why we might want to prefer them over the classical techniques).",
"Other comments: - There remain open questions about the quality of the MSE numbers; there are a number of instances in which the authors cite that the \"Meta-Policy MSE is not a simple to interpret\" (The remainder of this sentence is incomplete in the paper), yet little is done to further justify why it was used here, or why many of the deep representation techniques do not perform very well.",
"- The authors do not list how many observations they are given before the deep representations are learned.",
"Why is this?",
"Additionally, is it possible that not enough data was provided?",
"- The authors assert that 10 dimensions was chosen arbitrarily for the size of the latent space, but this seems like a hugely important choice of parameter.",
"What would happen if a dimension of 2 were chosen?",
"Would the performance of the deep representation models improve?",
"Would their performance rival that of RGE-FI?",
"- The authors should motivate the algorithm on page 6 in words before simply inserting it into the body of the text.",
"It would improve the clarity of the paper.",
"- The authors need to be clearer about their notation in a number of places.",
"For instance, they use .gamma to represent the distribution of goals, yet it does not appear on page 7, in the experimental setup.",
"- It is never explicitly mentioned exactly how the deep representation learning methods will be used.",
"It is pretty clear to those who are familiar with the techniques that the latent space is what will be used, but a few equations would be instructive (and would make the paper more self-contained).",
"In short, the paper has some interesting ideas, yet lacks a clear takeaway message.",
"Instead, it contains a large number of metrics and computes them for a host of different possible variations of the proposed techniques, and does not include significant explanation for the results.",
"Even given my lack of expertise in this subject, the paper has some clear flaws that need addressing.",
"Pros: - A clear, well-written abstract and introduction - While I am not experienced enough in the field to really comment on the originality, it does seem that the approach the authors have taken is original, and applies deep learning techniques to avoid having to custom-design a \"feature space\" for their particular family of problems.",
"Cons: - The figure captions are all very \"matter-of-fact\" and, while they explain what each figure shows, provide no explanation of the results.",
"The figure captions should be as self-contained as possible (I should be able to understand the figures and the implications of the results from the captions alone).",
"- There is not much significance in the current form of the paper, owing to the lack of clear message.",
"While the overarching problem is potentially interesting, the authors seem to make very little effort to draw conclusions from their results.",
"I.e.",
"it is difficult for me to easily visualize all of the \"moving parts\" of this work: a figure showing the relationship bet - Too many individual ideas are presented in the paper, hurting clarity.",
"As a result, the paper feels scattered.",
"The authors do not have a clear message that neatly ties the results together."
] | {
"criticism": 20,
"example": 4,
"importance_and_relevance": 7,
"materials_and_methods": 17,
"praise": 3,
"presentation_and_reporting": 17,
"results_and_discussion": 10,
"suggestion_and_solution": 6,
"total": 37
} | 0.540541 | 0.108108 | 0.189189 | 0.459459 | 0.081081 | 0.459459 | 0.27027 | 0.162162 | 37 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2.27027 | -6.820409 | 9.09068 |
Long-term Forecasting using Tensor-Train RNNs | We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data. | This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks. The technical approach seems well motivated, plausible, and potentially a good contribution, but the experimental work has numerous weaknesses which limit the significance of the work in current form. For one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting. It would be far more convincing if the author’s used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results. Otherwise, the reader is left with concerns that the author’s may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below). One weakness with the experiments is that it is not clear that they were fair to RNN or LSTM, for example, in terms of giving them the same computation as the TT-RNNs. Section Hyper-parameter Analysis” on page 7 explains that they determined best TT rank and lags via grid search. But presumably larger values for rank and lag require more computation, so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods. As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies, so this seems like a very serious and plausible concern. Also, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, and thus dismissing direct comparisons of ARMA against TT-RNN. But they are unclear whether they gave ARMA as much hyper-parameter tuning (e.g. for number of lags) via grid search as their proposed TT-RNN benefited from. Again, the concern here is that the experiments are plausibly not being fair to all methods equally. So, due to the weaknesses in the experimental work, this work seems a bit premature. It should more clearly establish that their proposed TT-RNN are indeed performing well compared to existing SOTA. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. 
I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nLong-term Forecasting using Tensor-Train RNNs\nWe present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higher-order structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks. The technical approach seems well motivated, plausible, and potentially a good contribution, but the experimental work has numerous weaknesses which limit the significance of the work in current form. For one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting. It would be far more convincing if the author’s used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results. Otherwise, the reader is left with concerns that the author’s may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below). One weakness with the experiments is that it is not clear that they were fair to RNN or LSTM, for example, in terms of giving them the same computation as the TT-RNNs. Section Hyper-parameter Analysis” on page 7 explains that they determined best TT rank and lags via grid search. But presumably larger values for rank and lag require more computation, so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods. As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies, so this seems like a very serious and plausible concern. Also, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, and thus dismissing direct comparisons of ARMA against TT-RNN. But they are unclear whether they gave ARMA as much hyper-parameter tuning (e.g. for number of lags) via grid search as their proposed TT-RNN benefited from. Again, the concern here is that the experiments are plausibly not being fair to all methods equally. So, due to the weaknesses in the experimental work, this work seems a bit premature. 
It should more clearly establish that their proposed TT-RNN are indeed performing well compared to existing SOTA.",
"role": "assistant"
}
]
| [
" This work addresses an important and outstanding problem: accurate long-term forecasting using deep recurrent networks.",
"The technical approach seems well motivated, plausible, and potentially a good contribution, but the experimental work has numerous weaknesses which limit the significance of the work in current form.",
"For one, the 3 datasets tested are not established as among the most suitable, well-recognized benchmarks for evaluating long-term forecasting.",
"It would be far more convincing if the author’s used well-established benchmark data, for which existing best methods have already been well-tuned to get their best results.",
"Otherwise, the reader is left with concerns that the author’s may not have used the best settings for the baseline method results reported, which indeed is a concern here (see below).",
"One weakness with the experiments is that it is not clear that they were fair to RNN or LSTM, for example, in terms of giving them the same computation as the TT-RNNs.",
"Section Hyper-parameter Analysis” on page 7 explains that they determined best TT rank and lags via grid search.",
"But presumably larger values for rank and lag require more computation, so to be fair to RNN and LSTM they should be given more computation as well, for example allowing them more hidden units than TT-RNNs get, so that overall computation cost is the same for all 3 methods.",
"As far as this reviewer can tell, the authors offer no experiments to show that a larger number of units for RNN or LSTM would not have helped them in improving long-term forecasting accuracies, so this seems like a very serious and plausible concern.",
"Also, on page 6 the authors say that they tried ARMA but that it performed about 5% worse than LSTM, and thus dismissing direct comparisons of ARMA against TT-RNN.",
"But they are unclear whether they gave ARMA as much hyper-parameter tuning (e.g.",
"for number of lags) via grid search as their proposed TT-RNN benefited from.",
"Again, the concern here is that the experiments are plausibly not being fair to all methods equally.",
"So, due to the weaknesses in the experimental work, this work seems a bit premature.",
"It should more clearly establish that their proposed TT-RNN are indeed performing well compared to existing SOTA."
] | {
"criticism": 8,
"example": 2,
"importance_and_relevance": 3,
"materials_and_methods": 15,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 3,
"suggestion_and_solution": 3,
"total": 15
} | 0.533333 | 0.133333 | 0.2 | 1 | 0.133333 | 0 | 0.2 | 0.2 | 15 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 2.4 | 2.33687 | 0.06313 |
Explicit Induction Bias for Transfer Learning with Convolutional Networks | In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We eventually recommend a simple penalty using the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme using weight decay on a partially frozen network. | The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning. Specifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the ones previously learned when solving a source task. + The paper is easy to read and well organized + The advantage of the proposed regularization against the more standard L2 regularization is clearly visible from the experiments - The idea per se is not new: there is a list of shallow learning methods for transfer learning based on the same L2 regularization choice [Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007] [Learning categories from few examples with multi model knowledge transfer, PAMI 2014] [From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013] I believe this literature should be discussed in the related work section - It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a fixed task, however, this solution seems to work quite well in the proposed experimental settings. From my understanding L2-SP-Fisher can be considered the best competitor of L2-SP so I think the paper should dedicate more space to the analysis of their difference and similarities both from the theoretical and experimental point of view. For instance: -- adding the L2-SP-Fisher results in table 2 -- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. 
I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExplicit Induction Bias for Transfer Learning with Convolutional Networks\nIn inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We eventually recommend a simple penalty using the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme using weight decay on a partially frozen network.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning. Specifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the ones previously learned when solving a source task. + The paper is easy to read and well organized + The advantage of the proposed regularization against the more standard L2 regularization is clearly visible from the experiments - The idea per se is not new: there is a list of shallow learning methods for transfer learning based on the same L2 regularization choice [Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007] [Learning categories from few examples with multi model knowledge transfer, PAMI 2014] [From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013] I believe this literature should be discussed in the related work section - It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a fixed task, however, this solution seems to work quite well in the proposed experimental settings. From my understanding L2-SP-Fisher can be considered the best competitor of L2-SP so I think the paper should dedicate more space to the analysis of their difference and similarities both from the theoretical and experimental point of view. For instance: -- adding the L2-SP-Fisher results in table 2 -- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher",
"role": "assistant"
}
]
| [
" The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning.",
"Specifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the ones previously learned when solving a source task.",
"+ The paper is easy to read and well organized + The advantage of the proposed regularization against the more standard L2 regularization is clearly visible from the experiments - The idea per se is not new: there is a list of shallow learning methods for transfer learning based on the same L2 regularization choice [Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007] [Learning categories from few examples with multi model knowledge transfer, PAMI 2014] [From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013] I believe this literature should be discussed in the related work section - It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a fixed task, however, this solution seems to work quite well in the proposed experimental settings.",
"From my understanding L2-SP-Fisher can be considered the best competitor of L2-SP so I think the paper should dedicate more space to the analysis of their difference and similarities both from the theoretical and experimental point of view.",
"For instance: -- adding the L2-SP-Fisher results in table 2 -- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher"
] | {
"criticism": 0,
"example": 1,
"importance_and_relevance": 1,
"materials_and_methods": 5,
"praise": 1,
"presentation_and_reporting": 2,
"results_and_discussion": 2,
"suggestion_and_solution": 3,
"total": 5
} | 0 | 0.2 | 0.2 | 1 | 0.2 | 0.4 | 0.4 | 0.6 | 5 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1.989924 | 1.010076 |
Exponentially vanishing sub-optimal local minima in multilayer neural networks | Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with \Omega(N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N \rightarrow \infty datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d_0 = \tilde{\Omega}(\sqrt{N}), and a more realistic number of d_1 = \tilde{\Omega}(N/d_0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d_0 = 16 hidden neurons. | This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima? The approach in this paper is to study a standard MNN with one hidden layer. They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially. They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution. Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice. This result seems interesting, although it is clearly not sufficient to explain even the success on the setting studied in this paper, since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: to estimate the latter, the size of each basin of attraction should be taken into account. The authors are aware of this point and mention it as a disadvantage. However, since this question in general is a difficult one, any progress might be considered interesting. Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima. The paper is well written and well presented, and the limitations of the approach, as well as its advantages over previous work, are clearly explained. As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work. I did not verify the proofs in the appendix. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. 
Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nExponentially vanishing sub-optimal local minima in multilayer neural networks\nBackground: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with Omega(N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N\rightarrowinfty datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d_0=\tilde{Omega}(sqrt{N}), and a more realistic number of d_1=\tilde{Omega}(N/d_0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d_0 = 16 hidden neurons.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima? The approach in this paper is to study a standard MNN with one hidden layer. They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially. They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution. Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice. This result seems interesting, although it is clearly not sufficient to explain even the success on the setting studied in this paper, since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: to estimate the latter, the size of each basin of attraction should be taken into account. The authors are aware of this point and mention it as a disadvantage. However, since this question in general is a difficult one, any progress might be considered interesting. Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima. The paper is well written and well presented, and the limitations of the approach, as well as its advantages over previous work, are clearly explained. As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work. I did not verify the proofs in the appendix.",
"role": "assistant"
}
]
| [
" This paper studies the question: Why does SGD on deep network is often successful, despite the fact that the objective induces bad local minima?",
"The approach in this paper is to study a standard MNN with one hidden layer.",
"They show that in an overparametrized regime, where the number of parameters is logarithmically larger than the number of parameters in the input, the ratio between the number of (bad) local minima to the number of global minima decays exponentially.",
"They show this for a piecewise linear activation function, and input drawn from a standard Normal distribution.",
"Their improvement over previous work is that the required overparameterization is fairly moderate, and that the network that they considered is similar to ones used in practice.",
"This result seems interesting, although it is clearly not sufficient to explain even the success on the setting studied in this paper, since the number of minima of a certain type does not correspond to the probability of the SGD ending in one: to estimate the latter, the size of each basin of attraction should be taken into account.",
"The authors are aware of this point and mention it as a disadvantage.",
"However, since this question in general is a difficult one, any progress might be considered interesting.",
"Hopefully, in future work it would be possible to also bound the probability of starting in one of the basins of attraction of bad local minima.",
"The paper is well written and well presented, and the limitations of the approach, as well as its advantages over previous work, are clearly explained.",
"As I am not an expert on the previous works in this field, my judgment relies mostly on this work and its representation of previous work.",
"I did not verify the proofs in the appendix."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 4,
"materials_and_methods": 7,
"praise": 2,
"presentation_and_reporting": 1,
"results_and_discussion": 6,
"suggestion_and_solution": 3,
"total": 12
} | 0.25 | 0 | 0.333333 | 0.583333 | 0.166667 | 0.083333 | 0.5 | 0.25 | 12 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 2.166667 | 2.150884 | 0.015782 |
Deep Boosting of Diverse Experts | In this paper, a deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. Our experimental results have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition. | This paper consider a version of boosting where in each iteration only class weights are updated rather than sample weights and apply that to a series of CNNs for object recognition tasks. While the paper is comprehensive in their derivations (very similar to original boosting papers and in many cases one to one translation of derivations), it lacks addressing a few fundamental questions: - AdaBoost optimises exponential loss function via functional gradient descent in the space of weak learners. It*s not clear what kind of loss function is really being optimised here. It feels like it should be the same, but the tweaks applied to fix weights across all samples for a class doesn*t make it not clear what is that really gets optimised at the end. - While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this methods should be better than normal boosting: if a class is more difficult, it*s expected that their samples will have higher weights and hence the next base model will focus more on them. And crudely speaking, you can think of a class weight to be the expectation of its sample weights and you will end up in a similar setup. - Choice of using large CNNs as base models for boosting isn*t appealing in practical terms, such models will give you the ability to have only a few iterations and hence you can*t achieve any convergence that often is the target of boosting models with many base learners. - Experimentally, paper would benefit with better comparisons and studies: 1) state-of-the-art methods haven*t been compared against (e.g. ImageNet experiment compares to 2 years old method) 2) comparisons to using normal AdaBoost on more complex methods haven*t been studied (other than the MNIST) 3) comparison to simply ensembling with random initialisations. Other comments: - Paper would benefit from writing improvements to make it read better. - "simply use the weighted error function": I don*t think this is correct, AdaBoost loss function is an exponential loss. When you train the base learners, their loss functions will become weighted. - "to replace the softmax error function (used in deep learning)": I don*t think we have softmax error function | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? 
Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nDeep Boosting of Diverse Experts\nIn this paper, a deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. Our experimental results have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n This paper consider a version of boosting where in each iteration only class weights are updated rather than sample weights and apply that to a series of CNNs for object recognition tasks. While the paper is comprehensive in their derivations (very similar to original boosting papers and in many cases one to one translation of derivations), it lacks addressing a few fundamental questions: - AdaBoost optimises exponential loss function via functional gradient descent in the space of weak learners. It*s not clear what kind of loss function is really being optimised here. It feels like it should be the same, but the tweaks applied to fix weights across all samples for a class doesn*t make it not clear what is that really gets optimised at the end. - While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this methods should be better than normal boosting: if a class is more difficult, it*s expected that their samples will have higher weights and hence the next base model will focus more on them. And crudely speaking, you can think of a class weight to be the expectation of its sample weights and you will end up in a similar setup. - Choice of using large CNNs as base models for boosting isn*t appealing in practical terms, such models will give you the ability to have only a few iterations and hence you can*t achieve any convergence that often is the target of boosting models with many base learners. - Experimentally, paper would benefit with better comparisons and studies: 1) state-of-the-art methods haven*t been compared against (e.g. ImageNet experiment compares to 2 years old method) 2) comparisons to using normal AdaBoost on more complex methods haven*t been studied (other than the MNIST) 3) comparison to simply ensembling with random initialisations. Other comments: - Paper would benefit from writing improvements to make it read better. - \"simply use the weighted error function\": I don*t think this is correct, AdaBoost loss function is an exponential loss. 
When you train the base learners, their loss functions will become weighted. - \"to replace the softmax error function (used in deep learning)\": I don*t think we have softmax error function",
"role": "assistant"
}
]
| [
" This paper consider a version of boosting where in each iteration only class weights are updated rather than sample weights and apply that to a series of CNNs for object recognition tasks.",
"While the paper is comprehensive in their derivations (very similar to original boosting papers and in many cases one to one translation of derivations), it lacks addressing a few fundamental questions: - AdaBoost optimises exponential loss function via functional gradient descent in the space of weak learners.",
"It*s not clear what kind of loss function is really being optimised here.",
"It feels like it should be the same, but the tweaks applied to fix weights across all samples for a class doesn*t make it not clear what is that really gets optimised at the end.",
"- While the motivation is that classes have different complexities to learn and hence you might want each base model to focus on different classes, it is not clear why this methods should be better than normal boosting: if a class is more difficult, it*s expected that their samples will have higher weights and hence the next base model will focus more on them.",
"And crudely speaking, you can think of a class weight to be the expectation of its sample weights and you will end up in a similar setup.",
"- Choice of using large CNNs as base models for boosting isn*t appealing in practical terms, such models will give you the ability to have only a few iterations and hence you can*t achieve any convergence that often is the target of boosting models with many base learners.",
"- Experimentally, paper would benefit with better comparisons and studies: 1) state-of-the-art methods haven*t been compared against (e.g.",
"ImageNet experiment compares to 2 years old method) 2) comparisons to using normal AdaBoost on more complex methods haven*t been studied (other than the MNIST) 3) comparison to simply ensembling with random initialisations.",
"Other comments: - Paper would benefit from writing improvements to make it read better.",
"- \"simply use the weighted error function\": I don*t think this is correct, AdaBoost loss function is an exponential loss.",
"When you train the base learners, their loss functions will become weighted.",
"- \"to replace the softmax error function (used in deep learning)\": I don*t think we have softmax error function"
] | {
"criticism": 7,
"example": 2,
"importance_and_relevance": 0,
"materials_and_methods": 9,
"praise": 0,
"presentation_and_reporting": 1,
"results_and_discussion": 5,
"suggestion_and_solution": 5,
"total": 13
} | 0.538462 | 0.153846 | 0 | 0.692308 | 0 | 0.076923 | 0.384615 | 0.384615 | 13 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 2.230769 | 2.230769 | 0 |
GeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks | The Fisher information metric is an important foundation of information geometry, wherein it allows us to approximate the local geometry of a probability distribution. Recurrent neural networks such as the Sequence-to-Sequence (Seq2Seq) networks that have lately been used to yield state-of-the-art performance on speech translation or image captioning have so far ignored the geometry of the latent embedding, that they iteratively learn. We propose the information geometric Seq2Seq (GeoSeq2Seq) network which abridges the gap between deep recurrent neural networks and information geometry. Specifically, the latent embedding offered by a recurrent network is encoded as a Fisher kernel of a parametric Gaussian Mixture Model, a formalism common in computer vision. We utilise such a network to predict the shortest routes between two nodes of a graph by learning the adjacency matrix using the GeoSeq2Seq formalism; our results show that for such a problem the probabilistic representation of the latent embedding supersedes the non-probabilistic embedding by 10-15\%. | In this paper, the authors propose to integrate the Fisher information metric with the Seq2Seq network, which abridges the gap between deep recurrent neural networks and information geometry. By considering of the information geometry of the latent embedding, the authors propose to encode the RNN feature as a Fisher kernel of a parametric Gaussian Mixture Model, which demonstrate an experimental improvements compared with the non-probabilistic embedding. The idea is interesting. However, the technical contribution is rather incremental. The authors seem to integrate some well-explored techniques, with little consideration of the specific challenges. Moreover, the experimental section is rather insufficient. The results on road network graph is not a strong support for the Seq2Seq model application. | iclr2018 | OpenReview | <think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>
| [
{
"content": "You are a member of the scientific community tasked with peer review. Review the following paper title and abstract.\n### Title + Abstract\nGeoSeq2Seq: Information Geometric Sequence-to-Sequence Networks\nThe Fisher information metric is an important foundation of information geometry, wherein it allows us to approximate the local geometry of a probability distribution. Recurrent neural networks such as the Sequence-to-Sequence (Seq2Seq) networks that have lately been used to yield state-of-the-art performance on speech translation or image captioning have so far ignored the geometry of the latent embedding, that they iteratively learn. We propose the information geometric Seq2Seq (GeoSeq2Seq) network which abridges the gap between deep recurrent neural networks and information geometry. Specifically, the latent embedding offered by a recurrent network is encoded as a Fisher kernel of a parametric Gaussian Mixture Model, a formalism common in computer vision. We utilise such a network to predict the shortest routes between two nodes of a graph by learning the adjacency matrix using the GeoSeq2Seq formalism; our results show that for such a problem the probabilistic representation of the latent embedding supersedes the non-probabilistic embedding by 10-15\\%.",
"role": "user"
},
{
"content": "<think>Okay, to do a decent peer review, I should start with summarizing what the work is about. This includes novelty, contributions, clarity, and impact. I might include suggestions to issues that I find critical to the usefulness of the work. Starting with novelty, there are different types of novelty. The first aspect of novelty is scope and scale. Is the work covering a broader perspective than previous works? The second aspect is methodology. Is the proposed methodology novel compared to previous methods? Do they provide any significant advantage over previous methodologies? The third aspect is results. Are the results different compared previous work? Does this lead to any novel theories or an extension of current theories? Next is clarity, I will be reading the work in depth and analyze if any sentences can be potentially ambiguous or misleading. If any sentences are ambiguous, I will state them and give a suggestion on how to rephrase such sentence or how to make more clear to the audience. Lastly, impact is important. I will attempt to predict how such work will impact the scientific community. This might include the applications of such work and how the work might transform the way we study a certain topic. Finally, I will might only mention issues that are critical. This is to keep the review as concise and helpful as possible.</think>\n\n\n\n In this paper, the authors propose to integrate the Fisher information metric with the Seq2Seq network, which abridges the gap between deep recurrent neural networks and information geometry. By considering of the information geometry of the latent embedding, the authors propose to encode the RNN feature as a Fisher kernel of a parametric Gaussian Mixture Model, which demonstrate an experimental improvements compared with the non-probabilistic embedding. The idea is interesting. However, the technical contribution is rather incremental. The authors seem to integrate some well-explored techniques, with little consideration of the specific challenges. Moreover, the experimental section is rather insufficient. The results on road network graph is not a strong support for the Seq2Seq model application.",
"role": "assistant"
}
]
| [
" In this paper, the authors propose to integrate the Fisher information metric with the Seq2Seq network, which abridges the gap between deep recurrent neural networks and information geometry.",
"By considering of the information geometry of the latent embedding, the authors propose to encode the RNN feature as a Fisher kernel of a parametric Gaussian Mixture Model, which demonstrate an experimental improvements compared with the non-probabilistic embedding.",
"The idea is interesting.",
"However, the technical contribution is rather incremental.",
"The authors seem to integrate some well-explored techniques, with little consideration of the specific challenges.",
"Moreover, the experimental section is rather insufficient.",
"The results on road network graph is not a strong support for the Seq2Seq model application."
] | {
"criticism": 3,
"example": 0,
"importance_and_relevance": 3,
"materials_and_methods": 5,
"praise": 2,
"presentation_and_reporting": 0,
"results_and_discussion": 1,
"suggestion_and_solution": 1,
"total": 7
} | 0.428571 | 0 | 0.428571 | 0.714286 | 0.285714 | 0 | 0.142857 | 0.142857 | 7 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 2.142857 | 1.57469 | 0.568167 |