Columns: Input (string, lengths 251 to 41.6k characters), Output (string, lengths 137 to 9.7k characters).
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: new definitions and evaluations for saliency methods introduces intrinsic evaluation metrics for saliency methods completeness and soundness that do not require additional models or human evaluation these metrics are grounded in logical proof concepts and force the method to output a saliency map that only explains the class of interest the paper proposes a maskbased saliency method that optimizes for soundness as well as completeness evaluations compare the proposed saliency method to other maskbased saliency methods on soundness and completeness deletion and insertion game metrics and saliency metric strengths intrinsic evaluation methods for saliency methods this paper proposes that saliency methods should be evaluated on completeness and soundness these attributes are grounded in logical proof systems and defined for saliency methods mathematically and textually in the paper these concepts provide a formal intrinsic framework to evaluate saliency methods without requiring human evaluations human evaluators often measure how well the saliency maps to their representations which may not always align with the models representations intrinsic evaluations are better suited to evaluating saliency methods on their ability to reflect the underlying model defining saliency method requirements the paper reframes saliency by introducing completeness and soundness as two necessary constraints for saliency methods previously we only required completeness saliency justified a models prediction by requiring completeness and soundness saliency justifies a models prediction but can not justify any other possible prediction these requirements improve the specificity of what a saliency method should output and make it more straightforward to interpret the results of a saliency method it also ensures that the saliency map for each class is distinct which can improve our ability to compare maps and draw meaningful insight between possible predictions clarity the paper is very well written and precise section 3 is straightforward to follow despite presenting complex definitions weaknesses novelty of replacement strategy the key novelty of the saliency method is its pixel replacement strategy where a new pixel value is sampled from another random image this strategy is known as hotdeck imputation where replacement values are sampled from the marginal feature distribution existing work has used hotdeck imputation as a masking strategy it has also shown hotdeck imputation and mean imputation ie grey pixel replacement result in similar changes to model outputs see what made you do this understanding blackbox decisions with sufficient input subsets by carter et al given the similarities to this work i suggest discussing it in the related work and including it in the comparison to existing metrics and methods missing related work related work on saliency evaluation methods should include model and data randomization tests from sanity checks for saliency maps by adebayo et al also consider the saliency method axioms from axiomatic attribution for deep networks by sundararajan et al sanity checks for saliency metrics by tomsett et al has a good evaluation of existing saliency evaluations lack of reproducibility the checklist indicates the paper does not include the compute details code or data please include computing details and other details needed for reproducibility if possible also release the code 
limited limitations section the paper does not discuss limitations understanding limitations is essential for readers who are looking to use this work please include a discussion on important considerations when using your method also please incorporate the ethical considerations in checklist 1 in the main text minor issues line 19 missing space between emdash and words line 37 and if so one and if so one line 213 procedures for find masking explanations procedures for finding masking explanations line 278 from original test set from the original test set please include a discussion of limitations some questions i had were what is the tradeoff between intrinsic and extrinsic evaluations do they both have a place in evaluating saliency methods or are intrinsic evaluations like yours always better is there a tradeoff between completeness and soundness should we optimize for both equally or is there ever a case where we should prioritize one over another can looking at completeness and soundness separately tell us anything different than looking at them together docsepthis paper presents an additional dimension soundness for evaluating saliency methods for explainable ai the authors define this concept then use it to provide both explanations for why existing heuristic methods work and to suggest new saliency methods strengths i find this to be a useful and convincing paper the paper is well written but the presentation of the concepts could be made more crisp in parts see questions below weaknessess nothing major i could see but this is somewhat outside my area the authors have adequately addressed limitations docsepthis paper presents a method for attributing saliency in the sense of determining which pixels contribute to a classification outcome to an image it does so in a novel fashion that explores the tradeoff between the notions of completeness and soundness pointing out that prior work in this domain does not address the latter the paper itself is reasonably well written albeit with some typos eg completeness is spelled wrong in some places i find that this addresses an original angle of this type of assessment of how the neural network makes its determination and in a principled way that gives it an advantage over some of its predecessors i think this is a significant result and generally view the conclusions drawn by this paper as positive there are not strong societal impacts of this work insofar as i can see and to the extent that these do exist the authors have made a good case docsepthe paper introduces and formalizes new evaluation metrics to ensure goodness of saliency methods based on the logical concepts of completeness and soundness the first ensures that the networks output is unchanged when using the masked with the saliency map input in place of the full image which is what most of current evaluations methods for saliency methods require the latter requires verifying that the same saliency method cannot be used to produce masked input that make the net output a different label and therefore ensures that the evaluation of saliency maps appropriately track the models probability of assigning labels the papers contributions are clear and significant and explained in a straightforward and accurate manner examples are significant and useful the originality of the contribution lies in connecting the context of saliency methods to logical proof systems and to formalize an evaluation approach which overcomes limitations of current methods and helps making them more rigorous and 
theoretically grounded a simple saliency method based on optimization is proposed which thanks of a change in the pixel replacement strategy allows to satisfy soundness at a small price in completeness this is proven to work as expected when validated on various datasets and compared to other saliency methods furthermore thanks to their formal frameworks of definitions authors provide an intrinsic justification about why methods used heuristically to improve the aspect of masks tv regularization and upsampling actually work in that they improve the soundness i think sharing code and data related to the paper would be beneficial for the scientific community ### Summary:
The paper introduces and formalizes new evaluation metrics to ensure the goodness of saliency methods. The reviewers' consensus about the paper was positive: they found that the paper's contributions are clear and significant, and they also appreciated the paper's originality. I therefore recommend acceptance.
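The completeness/soundness distinction discussed in this record can be probed with a simple masking check. The sketch below is an illustrative heuristic, not the paper's formal definitions: the function name, the probability-ratio score, and the max-over-other-classes score are all assumptions made for the example.

```python
import numpy as np

def mask_scores(model, x, mask, target_class):
    """Heuristic completeness/soundness probes for a saliency mask.

    model: callable mapping a batch of images to class probabilities.
    x: image of shape (H, W, C); mask: values in [0, 1], shape (H, W, 1).
    """
    full = model(x[None])[0]             # class probabilities on the full image
    masked = model((x * mask)[None])[0]  # class probabilities on the masked image

    # Completeness: the masked input should still support the class of interest.
    completeness = masked[target_class] / max(full[target_class], 1e-12)

    # Soundness (informal): the masked input should not gain support for any
    # other class relative to the full image.
    others = np.delete(np.arange(len(full)), target_class)
    soundness = 1.0 - float(np.max(np.maximum(masked[others] - full[others], 0.0)))
    return completeness, soundness
```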
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper is well written in this paper the author proposed an embased algorithm diem for set representation learning the author first provides the equivalence between the otke representation learning algorithm and a singlestep em algorithm with extra balanced assignment constraints on the estep then diem is developed and consistently outperformscompetes with otke algorithms in different empirical studies with the assistance of multiple em steps and extra regularization and diem is applicable both for supervised and unsupervised settings the paper is well written and easy to understand however i do have some comments 1 it is obvious to see that diem achieves better results than otke baseline in terms of offline evaluation metrics such as accuracy loglikelihood score and as the author mentioned the improvements come from the multiple steps em algorithms if this is the case has the runtime been increased in addition the author also mentioned that otketype methods would reduce the computational cost compared with attention settransformer based on two arguments the running time probably should be compared between different baselines 2 diem doesnt have better results than otke on the largest deepsea dataset which would influence the practical performance of diem on the largescale nlpbioinformatics tasks please refer above docsepthis paper proposes a novel set embedding method inspired by the em algorithm treating each element in a set as iid samples from a mixture of gaussians the procedure of computing pairwise similarities between the elements and prefixed set of reference vectors corresponds to the computation of responsibilities in estep for the mixture of gaussians and the embedding step using the similarities corresponds to the parameter update in mstep the previous approaches such as otke can directly be interpreted with this em view plus balanced assignment constraint based on this reinterpretation the paper proposes a novel setembedding method extending previous methods in various ways 1 use multiple steps of em updates 2 learn parameters other than reference vectors covariances and mixing proportions 3 learn the initial value of the parameters by placing prior distributions on them the resulting algorithm entitled differentiable em diem is demonstrated to excel in various settovec tasks overall i like the paper it is well written and the interpretation of the setembedding procedure as an em iteration indeed makes sense it is also good to see the authors derive a novel algorithm from their reinterpretation the experiments are diverse and thorough and as far as i can see they seem to be reproducible with all the details provided in the appendix i think the paper can be enhanced with some further clarification 1 in my opinion it is quite important to compare the number of parameters when comparing different set embedding methods for instance in lee et al 2019 they set the number of parameters for deepsets and set transformers roughly the same how many parameters were used for the proposed method i hope to see the parameter counts at least in the appendix it would also be helpful to compare the wallclock time for the forward passes especially for the proposed method it is worth checking the inference time wrt the number of em iterations k 2 there are quite a few hyperparameters or options for the proposed model the number of mixture components p number of em iterations k prior 
hyperparameter tau and the way of pooling pc sb or sb2 judging from the appendix the performance of the proposed approach is quite sensitive to the choice of these hyperparameters im also quite confused with three options for the pooling is there any guide for which one to choose was any of those three pooling methods dominant in general it is quite hard to directly compare the effect of individual choices of the hyperparameters because the results so far is not controlled experiments for the hyperparameters does the performance generally saturate with the number of mixture components p or the number of em steps k 3 have you considered using generative models other than a mixture of gaussians i guess the primary reason for the choice is its conjugacy but probably we can think of other conjugate pairs for the mixture components 4 collapsing the hyperparameters tau eta1 lambda 1 nu d 2 is weird for instance nu d 2 cannot be equal to one can you elaborate on this 5 how important is the step to initialize the parameters as the mode of the posteriors what happens with the randomly initialized parameters or learning them as well with gradient descent for instance if the mixture components are not conjugate so the map parameters are not easily estimated then we may consider different options the paper proposes an interesting idea and the experimental results are promising there are some minor concerns to be clarified docsepthis paper discusses that optimal transport kernel embedding otke can be regarded as a single expectationmaximization em step towards the maximum likelihood estimate of gaussian mixture models under mild conditions motivated by the finding this paper proposes differentiable em which can be regarded as a generalized version of otke with prior and several em steps experiments on omniglot unique character counting amortized clustering in cifar100 protein fold classification on scop 175 sentiment classification on sst2 and chromatin profile detection on deepsea demonstrate the effectiveness of differentiable em on set representation learning strengths 1 the connections between otke and em is insightful in set representation learning and differentiable em is well motivated 2 experimental results are impressive and support the claims made in this paper well weakness 1 time complexity or empirical wallclock time is needed to give a thorough analysis of differentiable em it will be helpful to present the time complexity or empirical wallclock time of differentiable em since it takes several em steps and costs more time compared to otke this paper presents a novel idea about set representation learning experiments cover multiple tasks and support the claims well though more analysis on time complexity is needed i think this paper is above the acceptance threshold docsepthis work proposes a new embedding for sets of features an important problem since many data modalities can be seen as such images sentences etc more precisely a set is represented by the output means of an em algorithm for fitting the input set with a mixture of gaussians the authors draw a new connection to an existing method for set embedding otke moreover their method achieves good experimental results pros this work introduces a principled method for representing sets the otke method is derived in a principled manner an interesting consequence is that the choice of the number of reference can be made using the existing litterature of mixture fitting good experimental results on varied datasets nlp bioinformatics 
vision synthetic sensitivity studies for different hyperparameters cons the proposed method may somehow lack of novelty since the idea of using prototypes has been very studied recently questions and remarks what is the intuition of doing multiple em steps in terms of embedding can this be related to the recent perceiver 1 architecture what is your view on this does diem learn the parameters of the prior distribution in the supervised setting this could be more clear in the paper the paper claims that the method has low computational complexity but it seems that this claim is not detailed in the paper apart from remarks on the number of prototypes could you elaborate on the complexity of the em steps it could be great to provide more details on how to set the hyperparameters for your method could you further discuss the impact of the prior depending on the task could we inject another priorinductive bias here features given by protein language models such as esm 2 can greatly improve results for scop 175 in fact this may be the actual stateoftheart for this dataset see table 5 in otke paper transfer learning is however orthogonal to the method proposed here but it is worth having this in mind in the related work the limitation was found could you elaborate on this 1 perceiver general perception with iterative attention andrew jaegle and felix gimeno and andrew brock and andrew zisserman and oriol vinyals and joao carreira 2 biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences rives alexander and meier joshua and sercu tom and goyal siddharth and lin zeming and guo demi and ott myle and zitnick c lawrence and ma jerry and fergus rob the paper seems sound and provides new insights for set representation with convincing experiments i tend to recommend acceptance but it would be great if the authors could answer my questions ### Summary:
This work proposes a new embedding for sets of features: a set is represented by the output means of an EM algorithm that fits the input set with a mixture of Gaussians. The authors draw a new connection to an existing method for set embedding (OTKE); moreover, their method achieves good experimental results. There is general consensus among the reviewers that the paper is sound, well written, and provides new insights for set representation, with convincing experiments. The authors have answered most of the comments raised by the reviewers and have revised the paper accordingly. I recommend acceptance as a poster.
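The review above characterizes OTKE-style set embedding as a single EM step of a Gaussian mixture fitted to the set. A minimal sketch of that view, under illustrative assumptions (isotropic components, uniform mixing weights, a fixed temperature tau, and the flattened updated means used as the embedding); this is not the paper's exact configuration.

```python
import numpy as np

def one_em_step_embedding(X, mu, tau=1.0):
    """One EM step of an isotropic Gaussian mixture, used as a set embedding.

    X: set of n elements in R^d, shape (n, d).
    mu: p reference vectors (initial means), shape (p, d).
    Returns the updated means, flattened into a single vector in R^{p*d}.
    """
    # E-step: responsibilities from (negative, scaled) squared distances to references.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)      # (n, p)
    logits = -d2 / (2.0 * tau)
    r = np.exp(logits - logits.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)                          # (n, p)

    # M-step: update means as responsibility-weighted averages of the set elements.
    mu_new = (r.T @ X) / (r.sum(axis=0)[:, None] + 1e-12)      # (p, d)
    return mu_new.reshape(-1)                                   # set embedding
```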
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper considers that spiking neural networks are not suitable for traditional adversarial robustness analysis methods and proposes linear relaxations for the membrane potential and spikes of snns these relaxations can be used to provide robust training for snns pros this paper is well written this paper gives a linear relaxation scheme for snns this solution takes into account not only temporal updates but also spatial updates cons im not sure if the linear relaxation is too loose i think an example of mnist can be used to illustrate the gap the upper and lower bounding strategies used for spike inputs in sec 32 are probabilistic not linear can the introduction of probability be consistent with linear relaxation if not ie xuxix how will the model work previous work has shown the robustness of snn is affected by coding poisson coding and directly coding 1 whats the way of coding in your paper there are some typos that need to be fixed and the citation is not standardized for example leaky integratedandfire lif gerstner et al 2014line 56should be leaky integratedandfire lif gerstner et al 2014 1 hiresnn harnessing the inherent robustness of energyefficient deep spiking neural networks by training with crafted input noise yes docsepthe paper proposes a methodology to tackle adversarial robustness in snns the authors show that their method is able to resist attacks of different types on small scale datasets the authors evaluation is pretty limited in 1 3 the authors show that bntt trained snn models are inherently more robust can the authors comment on how their methodology is different from 13 further the authors started their paper discussion with snns being advatangeous on hardware so it makes more sense to develop hardware aware robustness but the authors method is algorithmbased can the authors comment if their robustness will transfer to hadware as is or any modification will be required in 4 the authors show that adversarial robustness on hardware become pretty low so they come up with a normalization technique to resist attacks in 2 the authors show that the type of coding technique plays a role in determining adversarial robustness i am not sure if teh authors methodology can tarnsfer across differnet coding techniques 1 revisiting batch normalization for training lowlatency deep spiking neural networks from scratch y kim p panda frontiers in neuroscience 1638 2 rate coding or direct coding which one is better for accurate robust and energyefficient spiking neural networks y kim h park a moitra a bhattacharjee y venkatesha p panda icassp 20222022 3 visual explanations from spiking neural networks using interspike intervals y kim p panda scientific reports 11 article number 19037 2021 4 bhattacharjee abhiroop et al examining the robustness of spiking neural networks on nonideal memristive crossbars arxiv preprint arxiv220609599 2022 please see weakness section docsepin this paper a robust training method for snns is proposed it is based on sibp and scrown algorithms the results on 3 different datasets show attack error reduction with some original accuracy loss strengths 1 the contributions of this paper are clear and original 2 the achieved results are significant and relevant to advancing the stateoftheart weaknesses 1 there are several typos and semantically incorrect sentences throughout the text it is recommended to conduct thorough proofreading 2 the clarity of 
some key sections can be improved see comments below the limitations and societal impact of this work have not been discussed however there is no reason to penalize the submission for this since this work provides only positive impacts ### Summary:
This paper applies existing certification-based adversarial robustness techniques to spiking neural networks. The authors achieve this through upper and lower relaxations of the spiking equations. Review scores were high-variance, ranging from 4 through 8, and the reviews were generally of high quality. The largest concern was that the use of rate coding for the network's output limited the applicability of the technique; I found the authors' response to this concern satisfying. I appreciate that this paper is the first to apply certification-based techniques to spiking neural networks, and I believe it has the potential to produce significant impact. For that reason, based upon the reviews and my judgement of the potential impact, I recommend the paper be accepted.
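The certification approach discussed in this record builds on interval-style bound propagation (the review names S-IBP and S-CROWN). The sketch below shows only the generic IBP step through an affine layer; the SNN-specific relaxations of membrane potentials and spikes described in the review are not reproduced here.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval bound propagation through an affine layer y = W x + b.

    l, u: elementwise lower/upper bounds on the input, shape (d_in,).
    Returns elementwise lower/upper bounds on the output, shape (d_out,).
    """
    c = (u + l) / 2.0          # interval center
    r = (u - l) / 2.0          # interval radius (nonnegative)
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr
```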
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this paper proposes a framework hyperdynamics that takes in observations of how the environment changes when applying rounds of interactions and then generates parameters to help a learningbased dynamics model quickly adapt to new environments the framework consists of three modules an encoding module that maps the observation of a few agentenvironment interactions into a latent feature vector a hypernetwork that conditions on the latent vector and generates all parameters of a dynamics model dedicated to the observed system and a target dynamics model constructed using the generated parameters that predicts the future state by taking the current system state and the input action as input the authors evaluate the framework in a series of object pushing and locomotion tasks they have shown that a single hyerdynamics model allows fewshot adaptation to new environments outperforming several baselines while maintaining a performance that is on par with a set of models trained separately for each environment strengths this paper targets an important question of building a more generalizable dynamics model that can perform online adaptation to environments with different physical properties and scenarios that are not seen during training the authors have evaluated the method in several object pushing and robot locomotion tasks and shown superior performance over baselines that uses recurrent state representations or gradientbased metaoptimization many practical treatments used in the pipeline can be good references for the community to learn from eg how to encode object information in 3d specific representation of the object orientation and the use of geometryaware recurrent networks grnns to learn 3d feature grids etc weaknesses although i like the idea of this paper i believe the authors should provide more clarification and illustration of the experimental results to solidify the claims in the paper 1 what are the objects used in the pushing task the authors claim that their dataset consists of only 31 different object meshes with distinct shapes it is important to include images of the objects to give the readers a better understanding of how diverse the dataset is and how different the geometry of the seen and novel objects are this can help the readers better appreciate the generalization ability of the proposed method 2 it would be great if the authors can include some qualitative examples eg video to show the performance of the method purely from the numbers in the tables it is hard for the readers to imagine how well the proposed approach solves the tasks 3 it would make the paper more illustrative if the authors can include some analysis and visualization of the learned representations in the middle of the network for example how are the latent embeddings different for different objects are there any correlations between the embeddings and the actual physical properties how do the interactions affect the embedding will different interaction sequences result in the same embedding how do the learned representations from geometryaware recurrent networks grnns look like the authors claim that it can complete missing or occluded shape information can the authors provide some concrete evidence supporting this claim in the specific scenarios used in this paper how do different numbers of interactions affect the quality of the representation 4 how does evis detect the 
objects in the scene are these detections in 2d or 3d how accurate is the detection algorithm 5 the beginning of section 31 describes that an objects orientation is represented as a quaternion however at the end of section 31 the authors suggest that they discard the orientation information from states fed into the generated dynamics model this seems to me makes the state an incomplete representation of the environment where the authors only predict the position of the object which makes me wonder how does the model encode the geometry of the object will the missing of the orientation information introduce any ambiguities or uncertainties what if the object is reoriented it may be better to include comparisons of different state representations also in section 33 the authors suggest that they update the orientation using quaternion composition which seems to be inconsistent with what has been described before havent the model already discarded the orientation information other comments this paper only shows experiments in the simulation im curious are there any gaps before applying the method to the real world and what are these gaps for example how long does the model take to optimize the action trajectories when performing mpc can it support realtime feedback control in real physical scenarios especially when the environment is dynamic modelpredictive control relies on the environments feedback to correct the action sequences which can achieve a good control performance while tolerating a larger longterm prediction error in your experiments how important is the accuracy of the dynamics model in other words even if some baselines have a poorer forward prediction performance will mpc be able to bridge some of the performance gaps in table 3 why are there multiple red numbers in the antslope columns typo page 4 section 31 eint then maps zint to a 1dimensional code zint in r2 this sentence seems weird how does eint map zint to its own why does a 1dimensional code lies in a 2d space post rebuttal the authors response and the revisions to the manuscript have greatly improved the quality and clarity of the paper most of my major concerns regarding the implementation and evaluation details have been sufficiently addressed hence i decide to increase the score from 5 marginally below acceptance threshold to 6 marginally above acceptance thresholddocsep summary this paper proposes an adaptive dynamics model based on the idea of hypernetworks it is demonstrated that this approach compares favorably to other ways of adapting dynamics models such as conditioning on a separate feature input and meta learning by gradientbased model updates the proposed approach is evaluated on pushing and locomotion tasks pros the proposed approach for conditioning dynamics models on rollouts to model systemspecific properties using the hypernetworks idea seems novel and is interesting paper is clearly written the provided figures help understanding outperforms stateoftheart adaptive dynamics modeling approaches nagabandi et al 2019 sanchezgonzalez et al 2018b reasonable baselines are used for comparison such as fixed model xyz input feature conditioning direct expert ensemble and stateoftheart adaptive dynamics models cons the paper does not explain training details for the architecture sufficiently well how are the network components trained especially the visual recognition part for object pushing what kind of supervision with ground truth is required to train the components for instance for object detection and 
shape representation are components pretrained and how which lossesdata are used for training its unclear why moving from a canonical to an oriented shape representation in sec 31 should improve results shouldnt this limit generalization and require more training data giving standard deviations in addition to the average values in table 13 would complete the numerical results sec 1 why is planet hafner et al 2019 listed as no adaptation although it contains a recurrent state representation it appears magical that the approach performs better on novel than on seen objects during training for cheetahslope or antslope in table 3 please discuss recommendation the paper reads well and proposes an interesting novel approach which could deserve acceptance the paper should address the points raised in paper weaknesses questions for rebuttal please address points raised above in weaknesses typos p2 they are are p4 eint then maps zint to a 1dimensional code zint r2 shouldnt this be eint maps interactions to 2dimensional code zint r2 p4 which is typically comprised of an agent and its external environment which typically comprises which is typically composed of table 1 motion rediction error motion prediction error advice in figure 1 the concatenation symbol is slightly misleading as it could be interpreted as elementwise multiplication maybe replace it by cdotcdot postrebuttal comments the authors comments addressed my concerns on method and experimental details mostly well i keep with my rating 6 marginally above acceptance threshold docsep update thank you for your detailed response the newly added clarifications and sanity checks have greatly improved the quality of the paper and i am therefore increasing my score from 4 to 6 i believe the model capacity comparison table 6 is especially important for demonstrating the value of the new architecture and would recommend mentioning that result in the main paper original review the paper proposes a model for predicting the dynamics of a physical system based on hypernetworks given some observed interactions and some visual input the hypernetwork outputs the parameters of a dynamics model which then predicts the evolution of the systems state over time experiments are conducted on an object pushing and a locomotion task strengths 1 the paper addresses an important question namely how a dynamics model may adapt to environments that dont fully match its training distribution 2 the proposed use of a hypernetwork is plausible and novel to my knowledge 3 the related work section appears comprehensive and to my knowledge does not miss any major prior work weaknesses 1 the main claim of the paper is that hyperdynamics network offers better prediction accuracy and generalization than a standard dynamics model i feel like the evaluation of this question is confounded by the choice of tasks and baselines on the pushing benchmark the xyz vf and densephysnet operate on different modalities than hyperdynamics either no state information or no visual information and are therefore difficult to compare for the mbmaml baseline this is not specified the expertens model cannot be expected to generalize since it is designed to overfit on individual objects as a result only the direct baseline clearly operates in the same experimental regime as hyperdynamics however nothing is reported on the model architecture or the training method for that baseline raising the question if its model capacity was competitive my impression is that this experimental design blurs the effects 
of a using sideinformation to infer system properties and b utilizing such information through a hypernetwork as opposed to a standard dynamics predictor if the goal is to evaluate the new architecture these should be disentangled 2 on the locomotion benchmark the recurrent baseline is similarly unclear sanchezgonzalez et al is cited but that paper focusses on comparing recurrent models based on graph networks to those based on mlps and it is unclear which model was used 3 no results are reported for the prediction accuracy on the locomotion task which would have helped evaluate the performance of the dynamics models more directly than the task scores 4 many of these issues could have been avoided by testing on established benchmarks from the literature for which results are available if there is a simulator available generalization ability could still have been tested by varying the physical constants of the dataset 5 the paper contains a decent amount of typos and grammatical errors overall while the paper presents an interesting idea the experimental evaluation is not convincing in its current state baseline architectures are not fully specified many of them did not receive the same input and no benchmark task with previously reported results has been used as a result i recommend rejection at this time questions 1 in eq 1 should omega be a parameter of h instead of f 2 section 31 introduces the 1dimensional code zint in mathbbr2 so is it one or twodimensional 3 overall the dimensionality of the latent codes and hidden layers seems incredibly small eg only 12 numbers to encode prior interactions and 8 to encode shape is there really no benefit to using higher capacity modelsdocsep summary the authors present hyperdynamics a novel method for systemidentification and learning of flexible forward models that can be used in planning tasks the presented methods is generic and is shown on both locomotion and pushing tasks with different simulated robots i enjoyed reading this work a lot and i hope it gets accepted its a clever idea and most flaws that im about to point out are easily addressable by the authors strengths weaknesses strengths 1 the method is generic shown to work across tasks and environments 2 the baselines are strong when i started reading your paper i thought that densephysnet and some form of maml would be good candidates for this to compare again and it turns out these were indeed included 3 figure 1 caption as well as the introduction to section 3 do a great job at introducing the architecture in a way that would allow the reader to create a basic implementation 4 code was included i didnt run it but its clean and seems functional from what i can tell weaknesses 1 you really really need to be more clear in the main paper on the implementation details you cant move the amount of training data to the appendix and its not good practice to only include the network architecture by name in the main paper and what are all your losses you make the method looks super simple but then you train on shapenet some 2d reconstruction something about cropping and theres a gru in there too full backprop vs truncated backprop the appendix shines a little light on this but you need to be way more specific in the main paper that has to stand on its own 2 you dont motivate all the nittygritty implementation choices why did you add the decoder whats the performance if you remove it what about the cropping what if you dont do an objectcentric feature map but instead a few cnn layers what are the 
individual contributions of all these details 3 densephysnet visual foresight and many other works in this domain use simple realrobot experiments to demonstrate that their method can handle realistic robot noise obviously theres a global pandemic happening at the moment so i wont require you adding this for the rebuttal but i think in order to really establish this method maybe before putting it on arxiv youd have to add some realrobot experiments this can be as simple as a 180usd realsense and a 500usd robot arm plus a few objects and a playfield its become a standard for systemidentificationstyle works and its justified in my opinion since your method isnt inherently useful in simulation where the user has access to all the information and can arbitrarily resetreposition the model and since you dont have shapenet data for many realworld objects which you seem to need for pretraining could you at least add a sentence or two detailing how this would transfer to realworld problems tldr my main requests 2 motivate implementation details add ablations if you have any and 1 be more explicit about them in the main part of the paper impact recommendation despite that there seem to be a lot of hacks that make this method in the specific settings i think the general idea behind it is sound and i think the authors show that it performs better than the sota at least in simulation therefore id recommend acceptance given that the authors add the requested information in its current shape its a 6 for me but if my main concerns are addressed im happy to up this to a 7 or if major improvements are made and my questions below are answered to an 8 questions nitpicks comments kudos for not making another acronym method there are a lot of typos and orthographic errors would recommend a spellchecked or getting this proofed examples section 2 poinclouds section 2 properties in hand properties at hand maybe start the introduction with an example eg how children are able to chew on a block of wood to assess its hardness and then build towers with it important introduction when you go over iiv that feels a bit too long and litreviewy and misplaced in the introduction i would recommend the following changes a trim this severely only mention that there are modelbased methods that usually do only one environment and theres metalearning and how your method is more adaptable than either b move this into the literature section where you have to come back to it anyway c move the hypernetworks section from the literature into a separate background section and develop it a bit further since its less competing method and more you should know about this to understand out method also in the introduction you present iiv and you mention how your method is betterdifferent than iiii but you never address iv its become a standard to summarize the contributions again at the end of the introduction ideally as bullet points please add these in equation 1 why is the ordering otn for the sums i feel like ont would be more natural no when reading the method my main question was if the method would work on dense trajectories or on beforeandafter photos like densephysnet this is only answered a few pages later but i think this belongs in 3overview or 31 just to be clear youre gathering trajectories of length 4s ie 5 frames of 800ms where you do not retract the robot arm when pushing right compared to densephysnet where the arm is never visible because they take photos before and after complete standstill if thats the case how do you 
deal with occlusion from the arm do you encourage objectobject interactions in any way or do they just occur randomly or do you only ever experiment with single objects the object orientation vs state section isnt super clear youre subtracting an objects absolute starting positionorientation from its future trajectory points in 32 why a gru why not lstm why k16 and similarly why k5 this ties into the main criticism from above please motivate your choices in 41 i think this is a typo but it says you added beds to your experiment table i think theyd be a bit too large no d 41 specify the random massfriction range please 41 same with the total amount of training dataframes and since you wont have shapenet 42 i think its a halfcheetah not a cheetah 42 i dont understand why its unrealistic to assume arbitrary resetting in simulation thats one of the benefits of running simulations and common practice 5 what do you mean predicting both the structure and parameters of the target dynamics model parameters is clear mass friction etc but whats the structure here ### Summary:
This paper proposes HyperDynamics, a framework that takes into account the history of an agent's recent interactions with the environment to predict physical parameters such as mass and friction; these parameters are fed into a forward dynamics model, represented as a neural network, that is used for control. Pros: it addresses an important problem (adapting dynamics models to new environments) and provides strong baselines; it is well written, and the authors have improved clarity even further based on reviewers' comments. Cons: I agree with the reviewer that it is currently unclear how well this will transfer to the real world, and the idea of predicting physical parameters from a history of environment interactions is not novel in itself, although the proposed framework is, as far as I know. The authors should include related work along the lines of [1] (this is just one paper that comes to mind; others exist). [1] Preparing for the Unknown: Learning a Universal Policy with Online System Identification.
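The review above summarizes a hypernetwork that emits the parameters of a target dynamics model from a latent code describing recent interactions. A minimal PyTorch sketch of that pattern follows; the layer sizes, the single-layer target network, and the residual next-state prediction are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyHyperDynamics(nn.Module):
    """Hypernetwork sketch: a latent interaction code z produces the weights of a
    small dynamics model that maps (state, action) to a next-state prediction."""

    def __init__(self, z_dim, state_dim, action_dim):
        super().__init__()
        self.in_dim, self.out_dim = state_dim + action_dim, state_dim
        n_params = self.in_dim * self.out_dim + self.out_dim
        # Hypernetwork: emits the weight matrix and bias of the target model.
        self.hyper = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, z, state, action):
        params = self.hyper(z)                         # flat parameter vector
        n_w = self.in_dim * self.out_dim
        W = params[:n_w].view(self.out_dim, self.in_dim)
        b = params[n_w:]
        x = torch.cat([state, action], dim=-1)
        return state + x @ W.t() + b                   # residual next-state prediction
```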
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies the knn algorithm when applied to multiclass classification with a few samples the authors develop algorithms by formulating a distributionally robust variant of knn where each nearest neighbor is weighted based on least favorable distribution the paper is wellwritten and the proposed algorithm is supported by theoretical results and empirical evaluation however it lacks novelty and empirical evaluation does not convey the superior performance of the proposed method for example the authors discuss the success of metric learning in knn but exclude it in the experiments i understand that in small perclass setting the similar and dissimilar sets for learning the distance metric would be very imbalance but it would interesting to see to what extent the proposed algorithm improves upon it by making tweaks to metric learning such as hard negative sampling etc to overcome the imbalance ness issue overall while the paper is very wellwritten and enjoyable to read the lack of novelty and aforementioned issue on empirical evaluations prevents me from giving it a high score the paper lacks novelty and empirical evaluation does not strongly support the superior performance of the proposed method docsepthis paper proposes a distributionally robust version of knearest neighbors knn classifier that can perform well in a smallsample regime especially for a multiclass setting the authors propose to consider a minimax optimization problem for a distributionally robust classification and show that this infinitedimensional problem can be indeed solved by a finitedimensional convex problem its connection to the lipschitz regularization framework is also established they then propose the dr knn algorithm and show that it can be seamlessly used with learning neural features jointly the experiments show that the proposed algorithm can beat the existing baselines as well as other neural network based approaches this is a solid and wellwritten paper the mathematical formulation and the technical results are well motivated and very elegant based on the convex optimization theory the presentation is also very clear except that the notation is a bit heavy the experiments are well designed and executed to corroborate the power of the theoretical framework weaknesses are hard to find it is indicated in checklist that limitations are mentioned in section 6 but i cannot find any is there any limitation of this framework docsepthe authors take the generalization of the knn method for the multilabel classification problem which lifts the samples to feature spaces and replaces the distance weights with more general weight functions the distributionally robust formulation of this wellknown generalization is defined and shown to be equivalent to a much simpler problem when the ambiguity sets comprise wasserstein balls thanks to this equivalence the authors show that the worstcase distributions are characterized by the solution of a convex optimization problem there is further a solution algorithm proposed and thanks to this the authors compare the performance of wasserstein dro weighted knn with benchmark algorithms on wellknown classification datasets strengths the paper is written extremely well it is very easy and fun to follow the motivation is clear the proofs are correct and they follow a modern set of techniques the numerical experiments are very thorough and interesting weaknesses there 
are some missing discussions about the wasserstein dro side of the paper especially recently there is a strong focus on the structure of the worstcase distributions finite sample guarantees and asymptotic consistencies these are not mentioned in this paper and except for defining and solving the problem there is not much focus on the properties that come thanks to the wasserstein formulation the ambiguity sets that said the authors call those uncertainty sets which i believe should be named ambiguity sets are restricted to distributions supported on training points i have never seen this and this may be a dangerous approach i would like to see more discussions on this if i am wrong then seeing further references would be great further details are in the limitations section overall i am positive about the paper i would like to clarify the questions i asked above as well as the weaknesses mentioned my biggest concern or question that i would like to clarify is that the authors constrain the ambiguity sets to include distributions that are supported only on the training instances in general in most wasserstein classification settings the most useful results are thanks to the fact that we do not have such constraints it can be seen from the literature that the worstcase distributions are typically supported on at most n1 atoms please also check if this holds here characterized by a weighted mixture of training points as well as a point that is extremely far away from the training points though with a negligibly small weight this is how the wasserstein methods coincide with regularization techniques would it be possible for the authors to compare their method with a bruteforce method that solves the wasserstein dro problem where the balls support points are unconstrained i am also wondering whether the equivalence between 6 and 8 works because of such an assumption if my concern is not valid i would appreciate an explanation from the authors docsepthis paper aims to developing a distributionally robust knn classifier for multiclass few shot scenario to mitigate those weaknesses of existing similar methods it essentially learns the classdependent metrics to build corresponding optimal weighted knn classifiers sodesigned algorithm dr knn is able to hedge against feature uncertaintiessoreported comparison results show relatively favorable performance to sotas strengths 1for multiclass few shot metric learning the authors develop the optimal weighted knn classifiers by using the proposed dr knn algorithm to optimize a defined distributionally robust formulation including the classdependent weights in classification 2theoretically proving the formulation equivalent to a lipschitz norm regularization problem and analyzing a few properties to justify their algorithm 3empirically confirming the proposed algorithm to have competitive performance compared to the sotas in the same setting with various realdata sets weaknesses 1the proposed formulation lacks a sufficient clarification about the uniqueness including essential difference in principle from existing dro formulation 2the problem under study involves distributional robustness and metric learning thus the authors should not overlook some existing works in the two aspects at least being mentioned to make differences especially those appeared in 2021 and 2022 3the assumption among classes is not practice though the formulation or definition in this manu is somewhat trivial but its highlight lies in optimization and theoretical property analysis 
from which some conclusions or insights can be gained ### Summary:
The reviewers conclude that this is an interesting paper, especially reviewer pjnq, with substantial results that justify its acceptance. I can only recommend including all of the discussion parts in the camera-ready version.
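The review above describes a class-weighted nearest-neighbour rule whose weights come from a distributionally robust (Wasserstein) formulation. The sketch below shows only the final prediction rule with the class weights taken as given; the distance-based neighbour weighting is an assumption for illustration, not the paper's optimal weighting.

```python
import numpy as np

def weighted_knn_predict(x, X_train, y_train, k, class_weights):
    """Weighted k-nearest-neighbour vote with class-dependent weights.

    class_weights: one nonnegative weight per class label; here they are simply
    inputs to the rule rather than the output of a robust optimisation.
    """
    d = np.linalg.norm(X_train - x, axis=1)
    nn_idx = np.argsort(d)[:k]
    scores = {}
    for i in nn_idx:
        c = y_train[i]
        w = class_weights[c] / (1.0 + d[i])     # downweight far neighbours
        scores[c] = scores.get(c, 0.0) + w
    return max(scores, key=scores.get)
```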
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper presents a maximally expressive parametersharing scheme for hypergraphs and in general when modeling the high order interactions between elements of a set this setting is further generalized to multiple sets the paper shows that the number of free parameters in invariant and equivariant layers corresponds to the different partitioning of the indexset of input and output tensors experimental results suggest that the proposed layer can outperform existing methods in supervised learning with graphs the paper presents a comprehensive generalization of a recently proposed model for interaction across sets to the setting where some of these sets are identical this is particularly useful and important due to its applications to graphs and hypergraphs as demonstrated in experiments overall i enjoyed reading the paper my only concern is the experiments 1 some of the benchmark datasets for the proposed task as well as some wellknown methods see battaglia et al18 and references in there are missing 2 applying the model of hartford et al18 to problems where interacting sets are identical is similar to applying convolution layer to a feature vector that is not equivariant to translation in both cases the equivariance group of data is a strict subgroup of the equivariance of the layer do you agree that for this reason all the experiments on the synthetic dataset is flawed docsepgiven a graph g of n vertices the activations at each level of a graph neural network gnn for g can be arranged in an nk tensor t for some k a fundamental criterion is that this tensor must be equivariant to permutations of the vertices of g in the sense of each index of of t being permuted simultaneously this paper enumerates the set of all linear maps that satisfy this criterion ie all linear maps which the authors claim can serve as the analog of convolution in equivariant gnns the authors find that for invariant neural networks such maps span a space of dimension just bk whereas for equivariant neural networks they span a space of dimension b2k the proof of this result is simple but elegant it hinges on the fact that the set of tensor elements of the same equality type is both closed and transitive under the permutation action therefore the dimensionality of the subspace in question is just the number of different identity types ie partitions of either 1k or 12k depending on whether we are talking about invariance or equivariance my problem with the paper is that the authors model of gnns doesnt actually map to what is used in practice or what is interesting and useful let me list my reservations in increasing order of significance 1 the authors claim that they give a full characterization of equivariant layers this is not true equivariance means that there is some action of the symmetric group sn on each layer and wrt these actions the network is equivariant collecting all the activations of a given layer together into a single object l this means that l is transformed according to some representation of sn such a representation can always be reduced into a direct sum of the irreducible representations of sn the authors only consider the case then the representation is the kth power of the permutation representation technically called the defining representation of the sn this corresponds to a specific choice of irreducibles and is not the most general case in fact this is not an unnatural choice and 
all gnns that i know follow this route nonetheless technically saying that they consider all possible equivariant networks is not correct 2 the paper does not discuss what happens when the input tensor is symmetric on the surface this might seem like a strength since it just means that they can consider the more general case of undirected graphs although they should really say so in reality when considering higher order activations it is very misleading because it leads to a massive overcounting of the dimensionality of the space of convolutions in the case of k2 for example the dimensionality for undirected graphs is probably closer to 5 than 15 for example i didnt count 3 finally and critically in actual gnns the aggregation operation in each layer is not linear in the sense that it involves a product of the activations of the previous layer with the adjacency matrix messages might be linear but they are only propagated along the edges of the graph in most cases this is motivated by making some reference to the geometric meaning of convolution the weisfeilerlehman algorithm or message passing in graphical models in any case it is critical that the graph topology be reintroduced into the network at each layer the algebraic way to see it is that each layer must mix the information from the vertices edges hyperedges etc the model in this paper could only aggregated edge information at the vertices vertex information could not be broadcast to neighboring vertices again the elemenary step of collecting vertex information from the neighbors but only the neighbors cannot be realized in this model therefore i feel that the model used in this paper is rather uninteresting and irrelevant for practical purposes if the authors disagree i would encourage them to explicitly write down how they think the model can replicate one of the standard message passing networks it is apparent from the 15 operations listed on page 11 that they have nothing to do with the graph topology at all minor gripes i wouldnt call 3 and 4 fixed point equations thats usually used in dynamical systems here there is an entire subspace fixed by all permutations below 1 they probably mean that up to permutation veclveclt docsepthis paper explores maximally expressive linear layers for jointly exchangeable data and in doing so presents a surprisingly expressive model i have given it a strong accept because the paper takes a very wellstudied area convolutions on graphs and manages to find a far more expressive model in terms of numbers of parameters than what was previously known by carefully exploring the implications of the equivariance assumptions implied by graph data the result is particularly interesting because the same question was asked about exchangeable matrices instead of jointly exchangeable matrices by hartford et al 2018 which lead to a model with 4 bases instead of the 15 bases in this model so the additional assumption of joint exchangeability ie that any permutations applied to rows of a matrix must also be applied to columns or equivalently the indices of the rows and columns of a matrix refer to the same items nodes gives far more flexibility but without losing anything with respect to the hartford et al result because it can be recovered using a bipartite graph construction described below so we have a case where an additional assumption is both useful in that it allows for the definition of a more flexible model and benign because it doesnt prevent the layer from being used on the data explored in hartford et 
al i only have a couple of concerns 1 i would have liked to see more discussion about why the two results differ to give readers intuition about where the extra flexibility comes from the additional parameters of this paper come from having parameters associated with the diagonal intuitively self edges get treated differently to other edges and having parameters for the transpose of the matrix intuitively incoming edges are different to outgoing edges neither of these assumptions apply in the exchangeable setting where the matrix may not be square so the diagonal and transpose cant be used because these differences arent explained the synthetic tasks in the experimental section make this approach look artificially good in comparison to hartford et al the tasks are explicitly designed to exploit these additional parameters so framing the synthetic experiments as here are some simple functions for which we would need the additional parameters that we define makes sense but arguing that hartford et al fail approximating rather simple functions page 7 is misleading because the functions are precisely the functions on which you would expect hartford et al to fail because its designed for a different setting 2 those more familiar of the graph convolution literature will be more familiar with gcn kipf et al 2016 graphsage hamilton et al 2017 monti et al 2017 etc most of these approaches are more restricted version of this work hartford et al so we wouldnt expect them to perform any differently from the hartford et al baseline on the synthetic dataset but including them will strengthen the authors argument in favour of the work i would have also liked to see a comparison to these methods in the the classification results 3 appendix a the 6 parameters for the symmetric case with zero diagonal reduces to the same 4 parameters from hartford et al if we constrained the diagonal to be zero in the output as well as the input this is the case when you map an exchangeable matrix into a jointly exchangeable matrix by representing it as a bipartite graph 0 x xt 0 so the two results coincide for the exchangeable case might be worth pointing this out ### Summary:
The paper provides a comprehensive study and generalisation of previous results on linear permutation-invariant and permutation-equivariant operators (layers) for the case of hypergraph data on multiple node sets. Reviewers indicate that the paper makes a particularly interesting and important contribution, with applications to graphs and hypergraphs as demonstrated in the experiments. A concern was raised that the paper could be overstating its scope: the model might not actually give a complete characterization, since the analysis considers the permutation action only. The authors have rephrased that claim following the reviewer's comments, and have also revised the paper to include a discussion of how the model is capable of approximating message-passing networks. Two referees give the paper strong support; one referee considers the paper ok but not good enough. The authors have made convincing efforts to improve the paper and address the concerns.
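To make the basis-operation discussion above concrete, here is a small illustrative sketch (my own, not the authors' code; it includes only a handful of the basis operations for the k=2 case, and all names are placeholders) of why operations such as the transpose and the diagonal, which are unavailable for rectangular exchangeable matrices, become legitimate building blocks in the jointly exchangeable (square, shared row/column indices) setting: each operation commutes with applying the same permutation to rows and columns.

```python
# Toy sketch (not the authors' code): a permutation-equivariant linear map on an
# n-by-n matrix X built from a handful of the basis operations such layers combine.
# Under X -> P X P^T (same permutation on rows and columns), each op is equivariant.
import numpy as np

def equivariant_layer(X, w):
    """Linear combination of a few equivariant basis ops (subset of the full basis)."""
    n = X.shape[0]
    ops = [
        X,                                                    # identity
        X.T,                                                  # transpose
        np.diag(np.diag(X)),                                  # keep only the diagonal
        np.tile(X.sum(axis=1, keepdims=True), (1, n)) / n,    # row sums, broadcast over columns
        np.tile(X.sum(axis=0, keepdims=True), (n, 1)) / n,    # column sums, broadcast over rows
        np.tile(np.diag(X)[:, None], (1, n)),                 # diagonal, broadcast over columns
        np.ones((n, n)) * X.sum() / n**2,                     # total sum, broadcast everywhere
    ]
    return sum(wi * op for wi, op in zip(w, ops))

# quick equivariance check: f(P X P^T) == P f(X) P^T
rng = np.random.default_rng(0)
n = 6
X = rng.normal(size=(n, n))
w = rng.normal(size=7)
P = np.eye(n)[rng.permutation(n)]
lhs = equivariant_layer(P @ X @ P.T, w)
rhs = P @ equivariant_layer(X, w) @ P.T
print(np.allclose(lhs, rhs))  # True
```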
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this submission proposes an ensemble framework to improve learning disentangled representations with variational autoencoders vaes the approach builds on the assumption that entangled latent representations learned by vaes show some uniqueness in their latent space structure while disentangled representations exhibit some similarity an assumption corroborated by recent studies on that basis a vae ensemble approach is proposed where several vaes are connected through linear mappings between the individual latent spaces to encourage alignment of latent representations and thus disentanglement a formal derivation of the framework is provided and the formal validity of the underlying assumption demonstrated furthermore empirical evaluation of the proposed approach in comparison to the standard vae betavae and factorvae on the datasets dsprites main results main text and celeba appendix is performed yielding improved results on the factorvae disentanglement metric all baseline methods considered as well as the distance to orthogonality dto metric only standard vae considered strengths significance novelty the proposed approach builds on recent work by rolinek et al and duan et al which show pcalike behaviour in vaes and leverage these results to develop disentanglement scores for model selection this submission uses these insights for training an ensemble of vaes in order to improve learning of disentangled representations the claim is validated both formally as well as empirically on a benchmark dataset dsprites and stateoftheart methods like factorvae where the proposed framework performs favourably to my knowledge the proposed idea is novel and simple yet potentially quite powerful this approach could be relevant for other disentanglement methods and a wider audience employing vae approaches technical quality an important contribution of this paper is the thorough formal derivation and theoretical justification of the approach which to me appears sound the experimental evaluation is welldesigned and mostly succeeds in justifying the claims with some exceptions outlined below i believe that all the relevant details to reproduce the results are provided in particular the results that the dto comes close to 0 fig 2 for the ensemble approach illustrate that the latent representations of the different vae in the ensemble converge question 1 ie the linear transformations between latent space converge to signed permutations this means that it should not matter which latent representations in the ensemble is studied in the paper the first model in the ensemble is chosen lines 274275 however i am curious whether the authors considered the results polarisation and factorvae scores for other latent representations ie not the first model and how much the results agreed clarity i consider this paper wellwritten and wellstructured relevant details and formal justifications are provided in an appropriate manner resulting in a selfcontained paper weaknesses the ensemble approach comes at a cost which is probably the reason why only up to 5 parallel models were used can the authors comment on the running time and memory requirements compared to the competing methods i think the quality of the paper could be improved if these details and the restrictions of the ensemble approach were provided the results in table 1 comparison of baseline methods and ensemble approach wrt factorvae metric 
show that an ensemble of size 3 can outperform stateoftheart methods like factorvae on the considered factorvae metric however they also show that it might not always be beneficial to put more weight onto enforcing aligned latent representations for the same ensemble size gamma 1 this is a bit at odds with the premise of the paper as the discussion points out question 3 lines 285289 this could be due to balancing different contributions in the more extensive objective function however this could also hint at potential optimisation problems for more challenging tasks the examples for the latent traversal in the appendix are slightly less convincing and a comparison is only done wrt a standard vae however it would be much more insightful to compare the ensemble approach to betavae and factorvae latent traversal results similar to the last point in figure 2 it would be quite insightful to see the dto results for the betavae and especially the factorvae in my opinion this is a crucial aspect which so far is missing and could justify the approach even more isnt the whole motivation that betavae and factorvae should perform slightly worse wrt dto additional feedback figure 1 i like the illustration however i do not understand the bar plot vae betavae factorvae vae ensemble maybe an additional annotation could help line 8 sometime sometimes line 24 stateofthearts stateoftheart line 25 deploy variational autoencoder deploy the variational autoencoder or deploy variational autoencoders line 37 line 190 line 221 on contrary on the contrary line 74 closely approximate prior closely approximate the prior line 127 models model line 164 decomposition l2 term decomposition the l2 term line 224 such gap such a gap line 225 such case such a case line 233 does vae ensemble improves does the vae ensemble improve recommendation this submission was an enjoyable read it provides some new insights and i believe this paper can pose an important contribution in areas which are concerned with learning disentangled representations and vae models in my opinion the claims of the paper are justified both theoretically and empirically however there are certain aspects and concerns outlined above which need to be addressed adequately to warrant a publication at the moment i am inclined to accept the paper but i would like the authors to clarify the concerns and questions above postrebuttal i would like to thank the authors for the insightful rebuttal the authors were able to address my concerns adequately and i believe that the revision improved the quality of the paper quite a bit therefore i stand with my initial recommendation and due to the reasons stated above i endorse accepting this paper references rolinek et al variational autoencoders pursue pca directions by accident cvpr 2019 duan et al unsupervised model selection for variational disentangled representation learning iclr 2019docsepthis paper proposes a simple and effective technique to improve disentanglement by coupling the latent spaces of different vae models it builds on duan et al 2019s proposed method to rank the representations of different models by learning a vae ensemble with linear transformations between the latent spaces and an additional crossmodel reconstruction loss the authors show that they can achieve significantly better disentangling strengths the theoretical justification seems reasonable and builds on previous work the experiments are organized to answer three meaningful questions the results do suggest the vae ensemble learns better 
latent representations which can be converted between models with simple orthogonal linear transformations questions regarding the last term of the loss in equation 2 for a fixed i and j the loss is eqzijx zjj zij eqzijxzjj mji zii this loss term can be optimized by tuning the parameters of vae i vae j and mji do you backprop through all these or is there a stopgradient on zii when used in computing this loss term ie no gradients through vae i from this loss term what would be the effect of training the vae models in two stages independently first and then jointly in the ensemble would it help or hurt disentangling how would you express the total information cost of representing an image across the vaes in the ensemble say if you wanted to to compare the information rate to a single vae it doesnt make sense to add up the kl costs linearly suggestions it would help enormously to strengthen the findings and assertions regarding the effect of ensemble size and the gamma hyperparameter consider adding another disentanglement metric eg mig figure 5 in the appendix shows a larger effect on dto of the number of dims than the gamma hyperparameter this result and other results on celeba are perhaps worth describing in the main paper minor in figure 2a i assume the curves are overlapping does it help to use a log scale for the yaxis how are the latent dimensions sorted in figure 3 are the scores in table 2 across different training runsdocsep summary the authors introduce a novel vaebased approach for unsupervised learning of disentangled representations of image data the approach trains an ensemble of vaes along with pairwise linear transformations between their latent spaces the objective includes the elbo objectives for each vae as well as two additional pressures i an l2 similarity objective that pressures samples from each vae latent space to match under linear transformations samples from the other vae latent spaces and ii a crossmodel decoding objective that encourages decoding accuracy of the linearly transformed latent samples the authors provide a theoretical argument that the linear transformations should learn to be orthogonal and show some experimental results indicating that their model performs well compared to baselines when evaluated with an established disentangling metric pros the theoretical analysis in section 41 is clear and provides good mathematical intuition for the authors results the introduction and related work sections are clear and include a thorough set of references cons the authors baseline results give unexpectedly low metric scores the authors report factorvae metric values of 0665 for betavae and 0764 for factorvae on the dsprites dataset however the values reported in the factorvae paper and corroborated by others on the same dataset are significantly higher this makes me suspicious that something went wrong with the authors training perhaps they didnt train those baseline models to completion or something else went wrong having baseline results that are inconsistent with the existing literature makes me uneasy the traversals in figure 8 from the authors model are much less disentangled than other models in the literature for example they are much less disentangled than the traversals shown in the betavae paper and the factorvae paper on the same dataset thus from these traversals it seems that the authors model is performing worse than existing models in the literature the authors metrics indicate the opposite but as mentioned above im uncertain about the 
validity of those metric results figure 3a also suggests that the authors model is using too many informative latents ie not disentangling well i am not convinced by the authors intuitive justification in lines 216225 and appendix c that the crossmodel objective encourages entangled models to align to disentangled models specifically in that argument the authors seem to assume that orthogonal linear transformations are orthonormal however there is nothing to enforce normality of the transformations in the model hence the crossmodel encoding variance from an entangled to a disentangled model could be quite small the purpose of the crossmodel reconstructions is not clear particularly given that im not convinced by the authors intuitive justification of them the l2 regularization between the transformed encodings should pressure the crossmodel reconstructions to be good so i do not see the reason to include them in the model objective it would be good if the authors could do an ablation study without the crossmodel reconstructions the authors do not mention the computational complexity of their model yet computational complexity seems to be a significant drawback of it ensemble training is very computationally expensive so the authors should include some discussion about it as well as runtimes and memory requirements for their model furthermore with the crossmodel reconstructions the computational complexity of the authors model scales with the square of the number of ensemble elements which is quite a steep scaling the authors only compare to a couple relatively old baselines betavae and factorvae which are no longer stateoftheart however more recently a number of other vae models have been published that perform better in order to support their claims about stateoftheart performance the authors should compare to newer baselines here are a few examples dipvae variational inference of disentangled latent concepts from unlabeled observations kumar et al 2017 tcvae isolating sources of disentanglement in variational autoencoders chen et al 2018 spatial broadcast vae spatial broadcast decoder a simple architecture for learning disentangled representations in vaes watters et al 2019 the authors also dont include many metrics or datasets dsprites and celeba were used in the original betavae paper but more recently it has become the norm to test on a larger set of datasets and with a number of different metrics to convincingly show disentangling by the way a number of models datasets and metrics have been opensourced in distlib httpsgithubcomgoogleresearchdisentanglementlib which may be useful for comparing to more models with more metrics on more datasets summary i do not recommend accepting this paper baseline results are inconsistent with prior work the model seems to disentangle less well than existing methods and the authors dont do ablation experiments to justify the high computational complexity of the model ### Summary:
This paper proposes to use an ensemble of VAEs to learn better disentangled representations by aligning their latent representations through additional losses. The training method is based on recent work by Rolinek et al. (2019) and Duan et al. (2020), which suggests that VAEs tend to approximate PCA-like behaviour when they are trained to disentangle. The method is well justified from the theoretical perspective and the quantitative results are good. That said, the reviewers raised concerns about the qualitative nature of the learnt representations, which do not look as disentangled as the quantitative measures might suggest. There was a large range of scores given to this paper by the reviewers, which generated a long discussion. I have also personally looked at the paper. Unfortunately, I have to agree that the latent traversal plots do not look as disentangled as the metric scores would suggest, and as one might hope to see on a toy dataset such as dSprites; the traversals are certainly subpar even compared to the most basic approaches to disentanglement like beta-VAE. For this reason, and given the reviewer scores, I unfortunately have to recommend rejecting the paper this time around. However, I hope that the authors are able to address the reviewers' concerns and find the source of disagreement between their qualitative and quantitative results for future revisions of this work.
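To ground the discussion of the coupling losses above, here is a minimal sketch (assumed module names and shapes, not the authors' implementation) of the two extra terms the reviews describe: an L2 alignment loss between the linearly mapped latent code of one VAE and the latent code of another, plus a cross-model reconstruction term that decodes the mapped code with the other model's decoder.

```python
# Minimal sketch (assumptions, not the authors' code) of the pairwise coupling terms:
# align linearly mapped latent codes of two VAEs and add a cross-model reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, x_dim = 10, 784

class SmallVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return z, mu, logvar

vae_i, vae_j = SmallVAE(), SmallVAE()
M_ji = nn.Linear(latent_dim, latent_dim, bias=False)  # learnable map from latent space i to latent space j

x = torch.rand(32, x_dim)
z_i, mu_i, logvar_i = vae_i.encode(x)
z_j, mu_j, logvar_j = vae_j.encode(x)

# standard (per-model) ELBO terms: reconstruction plus KL to a standard normal prior
elbo_terms = sum(
    F.mse_loss(vae.dec(z), x) - 0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
    for vae, z, mu, lv in [(vae_i, z_i, mu_i, logvar_i), (vae_j, z_j, mu_j, logvar_j)]
)
align = F.mse_loss(M_ji(z_i), z_j)               # encourage aligned latents, up to a linear map
cross_rec = F.mse_loss(vae_j.dec(M_ji(z_i)), x)  # cross-model reconstruction through model j

gamma = 1.0
loss = elbo_terms + gamma * (align + cross_rec)
loss.backward()
```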
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper tackles the problem of restricted class unavailability after a deep learning model has already been trained on such restricted classes and the aim is to remove any information pertaining to the restricted classes from the model parameters so that the model will not be able to correctly classify the restricted classes in the future the approach presented includes identifying the model parameters that are most relevant to the restricted classes and removing the restricted class information from these parameters gradient ascent while ensuring that these parameters can still be used for accurately classifying other nonrestricted classes with the need to correctly assess the utility of the proposed approach several baseline methods have been proposed empirical results on the cifar100 and imagenet1k datasets illustrate how the proposed approach can be used positives 1 the paper studies an important problem of tackling with restricted classes 2 the presented approach displays an ability to remove restricted class information from model parameters negatives 1 the paper is not very clearly written with concepts repeated several times and not clear description on some others that are mentioned below 2 while the problem is interesting indeed the motivation for the proposed solution is not clearly presented instead of repeating the ideas it would be helpful to have a few clear examples that illustrate the need to solve this problem as well as a clear description of the behavior of the said approach while an example about the company logo is stated it would be helpful to have a few more clear examples from realworld settings to help the reader one such example a model trained to predict which treatment would be beneficial for the patient would need to be altered if the treatment cannot be offered in the future due to ethical or resource constraints 3 while empirical results on the cifar100 and imagenet1k datasets seem promising it would be helpful to study this in the realworld dataset issues such as generalizability due to distribution shifts in the future and fairness considerations when certain labels are dropped are potential directions additional comments 1 the notation for the excluded and nonexcluded classes is a bit confusing as ce cr can both mean excluded or restricted i would suggest to change this the paper studies an important problem however there are some challenges with respect to the writing motivation of the solution and potentially several important directions that can be addressed docsepthis paper proposes a new learning setting of finetuning a pretrained model to forget some specific categories which is motivated by classlevel privacy the solution to this challenge is firstly detecting the most related model parameters that significantly affect model performance on restricted classes and then tuning on a small number of examples with the losses of desired classification capability
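Before the concerns below, a rough sketch (my own illustration with hypothetical shapes and class ids, not the paper's code; the paper additionally applies an input transformation such as grayscale, which is omitted here) of the first step just described: scoring parameters by the gradient magnitude of the loss on a few restricted-class examples and keeping the highest-scoring ones.

```python
# Rough sketch (not the paper's implementation): mark parameters whose gradients on a
# few restricted-class examples have the largest magnitude as "restricted-class relevant".
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU(), nn.Linear(256, 100))
x_restricted = torch.rand(16, 3, 32, 32)       # a few examples of the restricted classes
y_restricted = torch.randint(90, 100, (16,))   # hypothetical restricted class ids

loss = F.cross_entropy(model(x_restricted), y_restricted)
grads = torch.autograd.grad(loss, list(model.parameters()))

keep_fraction = 0.05
masks = []
for g in grads:
    k = max(1, int(keep_fraction * g.numel()))
    threshold = g.abs().flatten().topk(k).values.min()
    masks.append((g.abs() >= threshold).float())  # 1 = treated as restricted-class relevant
```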
the proposed method is experimentally demonstrated effective than possible baselines i mainly have the following concerns 1 the motivation of the new setting is not strong in introduction the classlevel privacy is specified by violated privacy concerns and corresponding examples however they are not quite convincing to me and i feel more practical instances are needed to clarify the significance of studying classlevel privacy in particular in what situation there would be only a few training examples available when considering removing information of restricted class from model concerned with privacy 2 in related work individual data deletion ginart et al 2019 is cited but not properly evaluated following the work of data deletion i feel there also exists an important problem which is ignored that is making model forget some examples or some classes does not mean zero classification accuracy or random classification accuracy ie 1n in data deletion work ginart et al tune the pretrained model by only compensating the impact of deleted samples instead of forcing model have large error on them as a result the tuned model turns out to be never seeing the deleted examples this work obviously cannot guarantee it from the loss shown as eq 1 for example a 3way classifier on dog cat and leopard and leopard is the restricted class it is predicted a classifier training on dog and cat only would intend classify leopard to cat because of their natural similarity thus a careful clarification about this point is required in this paper especially from the view of classprivacy 3 the process of identifying parameters related to restricted classes seems quite empirically as a transformation component is needed from some prior knowledge the authors have mentioned it for images however many data privacy related data are also tabular in this case how to apply a proper transformation if this component is quite related to data format any workaround for this issue 4 from figure 3 kd is defined for remaining classes only but the kd loss also includes restricted classes 5 it is interesting to see the model performance comparison with the original training in term of remaining classes only also related to concern 2 the model performance on the original raining data of remaining classes only may be a good reference point for evaluation although original training data might be inaccessible in the proposed setting the paper has a weak motivation for the new setting the proposed method seems too heuristic and the evaluation for the new setting is not appropriate docsepin this paper the authors present a new method to remove information about specific classes from a trained model without reducing the performance of the remaining classes after the information is removed the model should not be able to identify the class anymore instead of retraining the complete model from scratch without the restricted classes the presented method only needs a few examples of the restricted classes and the remaining classes in terms of speed the presented method is 200 times faster on imagenet than a new model training without the restricted classes furthermore they present a method for identifying model parameters that are mainly relevant to the restricted classes the evaluation of the model is performed on the cifar100 imagenet1k and the cub200 dataset for a detailed comparison eight baseline methods were designed and evaluated an ablation study is performed on the class relevant parameters and the number of classes that are excluded the 
presented method achieves an accuracy close to the original model on the remaining classes in terms of accuracy also the forgetting prototype accuracy is close to the model trained only on the remaining classes introduction the paper is well written and the evaluation is very detailed it is an interesting idea to remove class information from the model with a limited amount of data however from the description in the paper it is not clear why this is a realworld problem it would be beneficial to rate the importance of this application if the authors had provided sources for such cases or a more detailed description of a specific scenario method the retraining procedure of the model with only a limited amount of training data is very detailed the description of the identification of the relevant parameters for the restricted classes is missing some details for example it is not defined what other transformation besides the grayscale transformation is used if other transformations are optional it would be good to know what type of transformations are used in the experiments furthermore it is not clear how the parameters with the highest gradients are selected is a fixed threshold used what is the minimum number of parameters of each layer that are selected is this a fixed number for each class does it depend on the number of excluded classes evaluation the evaluation is very detailed with eight baseline methods to show the performance of the presented method however the results of the fdr model are not shown in table 1 only mentioned in the text is there a reason for this adding the results of the fdr to table 1 would be beneficial it would be very interesting to see how the parameter selection influences the accuracy of the model unfortunately this is not part of the evaluation the presented method of retraining a model to forget a specific class is very interesting however the part for identifying the most relevant model parameter is missing some essential details for example how the parameters are selected manually or automatic or the number of selected parameters this information is essential to understand the method moreover the influence of the parameter selection method is not studied in the evaluation part docsepthis paper proposes a novel and practical problem called rcrmrld aiming to removel restricted categories from model representations with limited data they first give some direct solutions and analyze their weaknesses then they propose their own solution to discard the restricted class information from the restricted class relevant parameters experiments verify that this approach not only performs similar to fdr but also is faster than it pros 1 the problem rcrmrld seems interesting and practical which addresses the specific classlevel restriction by removing corresponding model representations this setting also save time and computational resource for large scale datasets 2 experiments in this paper are solid and convincing enough they design 5 basic baselines and perform corresponding ablation study considering rcrmrld is a new problem if there exists comparing some related works will be better cons 1 from my point of view the transformation f plays a key role in identifying the parameters that are highly relevant to the restricted classes however they seem only try the grayscale transformation and do not give more discussion about f if the model is just trained by grayscale images will this method fail for natural language tasks what transformation are you going to use i 
suggest that the authors make more discussion and comparison of the various transformations 2 from table 1 i find all the fpae of erwp are relatively high indicating that the feature representations of the model still contain much restricted category information although they indeed remove restricted category from classlevel attackers still can use some model inversion techniques eg 1 to restore the restricted class data with few owned ones leading to privacy leakage 3 identifying those parameters that are relevant to the restricted classes through erwp is still heuristic i admit that erwp seems to make sense but some verifications about this claim need to be included 4 except for related work i do not find any references in this paper at least in the introduction you should cite some related works to support your claims 5 colloquial expressions and grammar issues are common thus the writting needs further improvement typo 1 baseline 4 training of original model on limited nonrestricted class data with tolnrc missing words 2 ne and nr refer to the number of excluded classes respectively missing words and so on questions during rebuttal period please address and clarify the cons above references 1 zhang y jia r pei h et al the secret revealer generative modelinversion attacks against deep neural networks cvpr 2020 the setting proposed by this paper is novel and practical however there exists some technical flaws that need to bu further solved please see the main review for details ### Summary:
The paper proposes a technique to efficiently retrain a model when a small number of classes are required to be removed. Reviewers in general like the paper, but the key issue is the motivation for the problem. The motivating examples in the rebuttal are not very convincing because (a) the authors do not provide any evidence that such situations are critical or commonplace, and (b) the data points that are available for retraining might be very biased. A more careful grounding of the work would be important to motivate the ICLR community, and the ML community in general, to further study this problem, but for now, unfortunately, the paper does not seem ready for publication at ICLR.
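To connect these concerns to the mechanics discussed in the reviews above, here is a hedged sketch (not the authors' code; the exact weighting of the terms, and whether distillation also covers the restricted classes, are left ambiguous by the reviews, so this is one plausible reading) of a forgetting objective that combines gradient ascent on the restricted classes with cross-entropy and knowledge-distillation terms on the remaining classes.

```python
# Hedged sketch of the kind of forgetting objective under discussion: increase the loss
# on restricted classes while preserving the remaining classes via CE and distillation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU(), nn.Linear(256, 100))
teacher = copy.deepcopy(model).eval()            # frozen copy standing in for the original model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

x_r = torch.rand(16, 3, 32, 32)                  # restricted-class examples (placeholder data)
y_r = torch.randint(90, 100, (16,))              # hypothetical restricted class ids
x_k = torch.rand(16, 3, 32, 32)                  # remaining ("kept") class examples
y_k = torch.randint(0, 90, (16,))

logits_r, logits_k = model(x_r), model(x_k)
with torch.no_grad():
    teacher_k = teacher(x_k)

forget = -F.cross_entropy(logits_r, y_r)         # gradient ascent: increase loss on restricted classes
keep_ce = F.cross_entropy(logits_k, y_k)         # keep accuracy on remaining classes
keep_kd = F.kl_div(F.log_softmax(logits_k, dim=-1),
                   F.softmax(teacher_k, dim=-1), reduction="batchmean")  # stay close to the original model

loss = forget + keep_ce + keep_kd
optimizer.zero_grad()
loss.backward()
optimizer.step()
```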
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper demonstrates a rank diminishing behavior of deep neural networks considering the mapping from the input space to the feature space of an increasingly deeper intermedate layer theoretically it proves that the rank doesnot increase as the layer depth increases experimentally it demonstrates a general decreasing trend of rank on various nn architectures this work also empirically demonstrates that the number of major pca components at the final feature layer is much less than its ambient dimension which leads to feak correlation between very different categories strenghs 1 this work systematically studies the evolution of function rank throughout the layer computation and provides theoretical jusfication to the empirically observed rank diminishing behavior 2 the finding about the independence deficit of final feature manifolds is very interesting and provides insight to the lack of robustness of dnns weakness 1 classification dimension estimated by the number of major pca components in this work is not a good indicator of the feature dimesnion in fact a very low dimensional manifold can have high classification dimension therefore the main results about rank diminishing cannot explain the interesting finding about low classification dimension of final feature manifolds the statement in the abstract that independence deficit caused by the rank deficiency of deep networks is misleading 2 it seems that the definition of the rank of function and lemma 1 implicitly assume that the jacobian of neural network functions has a constant rank over the entire input space of rn this is a strong assumption that doesnot hold in general when this assumption holds for neural networks should be carefully discussed the authors adequately addressed the limitations docsepthis work aims to study the rank of hidden layer representations of neural networks in relation to how deep the layer is in the network in particular they note that the rank of the hidden layers diminish monotonically as we observe deeper layers numerical measures of rank are proposed and motivated the primary theoretical concerns are the rank of the jacobian from the input to the ith layer of the network essentially a linear approximation of the network mapping to that hidden layer and the dimension of the feature space for a hidden layer the paper further investigates the tolerance of the final hidden layer to dimensionality reduction by applying pca to features space and projecting onto a decreasing number of eigenvectors the number of eigenvectors remaining when a significant drop in performance is observed from the dimensionality reduction provides an approximation for the intrinsic dimensionality of the hidden layer finally the paper explores the idea that it is possible to use the logits of different categories to classify another category in a dataset one example is that by merely using 0923 as a weight on the logit for the triumphal arch category it is possible to predict the junco category without loss of accuracy strengths originality the paper is fairly original with the primary novelty being the rank metrics used and their justification additionally the paper touches on some possible connections between symmetry and rank which to my knowledge have not been explored however these connections are mainly pointed out but not discussed or treated theoretically quality the need for the numerical tools to measure 
rank is well motivated and the numerical tools themselves makes sense and are justified the claims that are made appear correct and inline with the evidence presented clarity there is some variance in the clarity of the paper for various sections the writing is clear and understandable and the mathematical notation is consistent and intuitive which helps the clarity in the earlier sections greatly sections 32 and 33 are examples where the notation made potentially tricky sections more manageable figure 4 stands out as a very helpful figure the effort on that is definitely worth it significance the paper touches on some significant points like the point linking symmetries to lower ranks the pca experiment and the experiment on the using categories as predictors for others may be of general interest to the ml community weaknesses originality a primary concern of this work is the fact that rankab leq minranka rankb this is even mentioned in the paper below equation 8 and is one of the primary tools for the work this however is a well established principal and quite intuitive thus the finding that the rank of the network decreases with layer depth is not surprising two possible interesting points noise increasing network rank and structure avoiding the rank staying the same across layers are mentioned but do not form part of the analysis the noise aspect is ignored in the theory and removed through the noise tolerant rank measures the point on monotone decrease over equality of rank due to structure is discussed briefly quality the various sections of the paper feel quite loosely connected up to section 4 the work considers whether the rank of the network decreases monotonically the section 5 considers pca just on the final feature space and is the used to point out that low dimensional feature spaces do not hold semantically meaningful features for each category in section 6 these sections are all related to rank however the connections do not seem to go deeper than that finally there are some points where unjustified claims are made or the phrasing makes these claims appear unjustified two examples are theorem 5 that investigates the behaviour of all singular values of deep neural networks when theorem 5 requires hidden layers of the same size and assumes the jacobians have gaussian elements which appears to be unrealistic in its own right and the principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components where it is not clear that relu networks would even fit this requirement clarity theorem 4 and theorem 5 which are the most technical aspects of this paper are not given enough space the clarity of the paper could benefit greatly from a more indepth treatment of this section in addition how the theory of these sections relate to figure 1 could also be explained more for example i acknowledge that the shape of the bottom row of figure 1 is nonlinear but to call it exponential which theorem 4 and 5 predict might also be a stretch understanding theorem 4 and 5 would help with interpreting figure 1 figure 1 could also use different colours especially for the bottom row where distinguishing between jacobian and feature rankdimension is not easy finally the notation of section 5 is not easy to follow particularly in the meaning of the ij double subscript where it is not immediately clear what i and j each refer to figure 4 does help clarify this a lot and with space constraints fully explaining the new notation may not be feasible 
significance this work appears generally significant however its significance is hindered by the same issues noted under the originality section i feel that this work might spend too much time on the potentially quite obvious points of rank diminishing and on introducing the partialrank and not enough time on the potentially significant points such as theorem 4 and 5 my primary recommendation would be to rephrase the work more in line with those theorems i suggest that the authors be clearer on the conditions required for their theory to help for example saying the principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components which does not seem to include relu networks but is described as general is unclear docsepthe paper studies the dynamics of the rank evolution of the feature maps of a neural network as a function of its depth by leveraging the abstract definition of rank of a function as the rank of the corresponding jacobian matrix the authors can study the rank dynamics in full generality ie without assuming any specific architecture this results in theorem 1 principle of rank diminishing that finds that the rank of neural network should never increase with depth due to its compositional nature a neural network can be see as a composition of l functions where l is the depth then the authors analyze conditions under which the rank strictly diminishes theorem 3 and convergence of the rank to specific constants theorem 45 finally the authors apply their low rank findings to the study of the dependence and correlations between different output classes they find that the output of some classes of imagenet eg hamster can be predicted with a linear combination of the output for irrelevant classes eg broccoli and mouse trap the authors attribute this problem to the low rank representations of very deep network as showed by their developed theory strengths 1 generality and importance of the results the theoretical results are very general and remarkable abstracting away from the specific architecture the only assumption is the compositional nature of the layers which includes most of architectures but excludes residual networks as the author mention in the supplementary material 2 paper organization the paper is very clear in explaining the abstract concepts of the first part until theorem 2 page 4 the theory is easy to digest at first read theorem 1 seems trivial if one thinks about linear networks ie simple product of matrices and the famous property textrankab leq mintextranka textrankb but the author do a great job to generalize it to any composition of functions through ideas from topology theory the other two theorems delve deep into the rank diminishing properties of function compositions showing an exponential decay of the rank with depth 3 independence deficit of feature manifolds section 5 provides a nice application of the theory and would probably cause follow up works in trying to understand how one can reduce this undesirable effect of strong dependences between semantically different classes weaknesses 1 inconsistency of residual network skip connections are proposed as a tool to partially prevent the rank deficiency problem and they give a brief theoretical argument in the supplementary material however this seems to be in contradiction with figure 1 where an exponential decay of the rank is observed for resnets mlpmixers and transformers all architectures that adopt skip connections this could be due to the fact 
that during training the magnitude of textresxi becomes large hence lowering the rank at initialization the magnitude of textresxi can be controlled eg with an appropriate factor inversely proportional to the depth see for instance 1 for this scaling and 2 for its consequences on the rank in any case i found it confusing that skip connections are adopted in almost all the architectures used to exemplify the theory skip connections that according to the authors should have an opposite effect 2 minor presentation style of structural and implicit impetus after brilliantly explaining the principle of rank diminishing in my view the concepts of structural impetus due to the specific architectural modules and implicit impetus due to the very compositions of infinite modules of rank diminishing could be better explained in particular i would invest some extra lines to better explain why normalization layer prevent rank diminishing and maybe better introduce some concepts or instance moving along directions of theorem 3 is not properly introduced and in general the current version of the theorem fails to convey a simple and intuitive explanation 1 hanin boris and david rolnick how to start training the effect of initialization and architecture advances in neural information processing systems 31 2018 2 noci lorenzo et al signal propagation in transformers theoretical perspectives and the role of rank collapse arxiv preprint arxiv220603126 2022 i do not see a negative societal impact of this theoretical work docsepthis work presents some theoretical results that imply that the rank of the jacobian between the inputs and features of deep networks is nonincreasing with depth they predict that in some settings it should in fact decrease exponentially with depth to some fixed value they also develop efficient methods to estimate the jacobian rank of real networks and show empirically that it indeed decreases with depth across a number of different architectures the effects of depth on the learned representations in deep networks and their geometric structure is an important area of study while this work contains an interesting combination of theoretical and empirical results i believe the connection between the two would have to be made more concrete the result about nondecreasing rank follows from the basic compositional structure of the network as the authors suggest yet it is unclear that the rank must decrease in fact there is a vast literature on signal propagation in deep networks that approaches this question from a different angle by studying covariance between hidden features as a function of depth in which case convergence to certain fixed points should essentially be equivalent to the rank of the representation collapsing 1 2 this literature also highlights ways to avoid this phenomenon with a careful choice of initialization and relies on modeling the dynamics of the correlations as a function of initialization hyperparameters this allows one for example to train convnets of depth 10000 2 in the simplest case of a network with orthogonal weights and no nonlinearities it is clear for example that there is no decrease in rank so there are clearly ways that it can be avoided another related issue is that the results are vague in the sense that the behavior of the rank is not connected in a quantitative way with the structure of the network ie the choice of nonlinearity initialization etc i think the submission would be much more compelling if the results could take these into account and make 
predictions about their effects on the rank for example how is the rank one converges to or the speed of the rank decay related to properties of the network an additional related concern is the connection between the experiments and the theory the experiments that attempt to show exponential decay of the rank are not plotted on a logarithmic scale which makes it hard to understand whether the decay there is indeed exponential or follows some other law in addition it appears that the rank decay in the case of resnets may be influenced more by the pooling layers or changes in width than any other operation yet no mention of this is made in the text 1 poole ben et al exponential expressivity in deep neural networks through transient chaos advances in neural information processing systems 29 2016 2 xiao lechao et al dynamical isometry and a mean field theory of cnns how to train 10000layer vanilla convolutional neural networks international conference on machine learning pmlr 2018 limitations have been addressed ### Summary:
This paper studied the rank of neural networks and showed that deeper networks will in general have lower rank. The paper presents a detailed empirical study of network rank, as well as some theoretical insights into why the rank is likely to decrease as the network becomes deeper and how the rank decrease can change with or without normalization layers. The paper also demonstrated an independence-deficit phenomenon, which happens when the rank of the output layer is too low. Overall, the reviewers feel that the paper gives interesting observations and nice intuitive explanations.
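As a toy illustration of the quantity the reviews discuss (not the paper's estimator, which uses more scalable numerical rank measures), one can compute the Jacobian from the input to each intermediate layer of a small network and count its singular values above a tolerance; by the chain rule, the resulting rank can never increase with depth.

```python
# Toy illustration: numerical rank of the input-to-layer-l Jacobian for increasing l.
# rank(J_l) is bounded by the smallest rank of any earlier layer, so it cannot increase.
import torch
import torch.nn as nn

torch.manual_seed(0)
widths = [20, 16, 12, 8, 8]
layers = []
for d_in, d_out in zip(widths[:-1], widths[1:]):
    layers += [nn.Linear(d_in, d_out), nn.Tanh()]

x = torch.randn(widths[0])
for l in range(1, len(widths)):
    prefix = nn.Sequential(*layers[: 2 * l])           # first l linear+tanh blocks
    J = torch.autograd.functional.jacobian(prefix, x)  # shape: (widths[l], widths[0])
    svals = torch.linalg.svdvals(J)
    rank = int((svals > 1e-5 * svals.max()).sum())
    print(f"layer {l}: width bound = {min(widths[:l + 1])}, estimated Jacobian rank = {rank}")
```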
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper is based on the sign agnostic learning sal method for capturing signed distance functions with neural networks it extends this method by incorporating derivative information which interestingly can likewise be handled in a sign agnostic manner maybe i missed this somewhere but if the derivatives are sign agnostic couldnt it happen that the inside is positive did the authors encounter that in some cases the paper presents and motivates this extension together with an additional theoretical insight about the minimal surface property of sal and sald in line with sal the paper presents a nice variety of results for shapes from different shape databases the quantitative results are also convincing its interesting to see the substantial difference between the vae and ad architectures for the comparison with sal its good to see the direct improvements from the derivative loss with a vae the paper leans heavily on sal and the change in terms of the overall method seems to be fairly small nonetheless i think its an interesting insight that the sign agnostic derivatives can be included in this way and i found it interesting to see how much they improve the results given that learning signed distance functions is a very active topic and a very useful building block for a variety of adjacent works that use learned sdfs the proposed sald approach seems like a very nice advancement of the state of the art so overall i really liked the paper figure 2 alone is impressive and makes a good case for the method together with the nice presentation and set of results i think this paper makes for a very good addition to iclr docsep this paper presents sald a new type of implicit shape representation that in addition to predicting the signed distance function aligns the gradients of the distance function with that of the neural distance field the resulting algorithm for example has improved approximation power and better preserves the sharp features than the ancestor sal sign agnostic learning the formulation is such that the architecture can consume raw point clouds strengths this paper certainly speaks to me first of all learning implicit representations directly from raw point clouds can allow for interesting applications such as better generative models or efficient 3d reconstruction networks the approach is very sensible in fact aligning gradients of the implicit surface with the ones of the data is not a new idea and has been done for instance in quadric fitting birdal t busam b navab n ilic s sturm p 2019 generic primitive detection in point clouds using novel minimal quadric fits ieee transactions on pattern analysis and machine intelligence 426 13331347 tasdizen t tarel j p cooper d b 1999 june algebraic curves that work better in proceedings 1999 ieee computer society conference on computer vision and pattern recognition cat no pr00149 vol 2 pp 3541 ieee the paper might benefit from including those especially because it has related work sections called primitives and implicit representations this is not a drawback but just the opposite there is a strong prior evidence that such approaches are useful i also like that the authors spend a reasonable amount of effort for theoretical analysis though i believe that this can be extended to more realistic scenarios as the authors aptly explained in the limitations weaknesses issues in addition to aligning the gradients many works benefit 
from constraining the gradient norm of the implicit function be nabla 1 see for instance slavcheva miroslava et al killingfusion nonrigid 3d reconstruction without correspondences proceedings of the ieee conference on computer vision and pattern recognition 2017 can we think of a similar approach here could the paper show some ablations with regularizers concerning the gradient norm nowadays the use of implicit 3d representations is omnipresent in the evaluations would it be possible to compare against the variants of deepsdf eg curriculum deepsdf or metasdf etc with that it might also be nice to include some more qualitative results in the supplementary would it be possible to include additional real objects that are nonhumans this might involve for instance cars in an autonomous driving scenario some discussions on the following aspects could be valuable for the reader i what would be a good suggestion to handle thinstructures it seems to be a common issue among many sdflike methods ii the use of raw point sets is good but such data usually come partially observed could this method support partial observations if not could there be workaround the chamfer distance and the variations thereon are obviously not well suited to assess the accuracy of the deep implicit representations this creates an urge for better quantitative metrics maybe the data driven ones for the future i would strongly suggest thinking about those to have more meaningful evaluation data some minor remarks can we already compare d and d and give an intuition about what they might refer to at the place they are first defined they strives to they strive to tested sald ability tested salds ability the surfaces produces the surfaces produceddocsepthis paper studies how to generate meshes from raw point clouds in particular this paper proposes a framework which is built on top of recent sign agnostic learning sal work compared to sal this work adds a gradient penalty term which encourages the derivative consistency the problem studied in this paper is important however the proposed method is very incremental and has several motivation issues i summarize the pros and con as follows pros 1 the idea of using gradient penalty to learn sharp signed distance function seems convincing in figure 4 the proposed method preserves sharp features compared to its counterpart sal 2 this paper presents a theoretic intuition why sald works under uniform distribution assumption sald finds the global minimum cons 1 my biggest concern is the motivation to learn sign distance function from its unsigned observations for data shapenet and faust used in this paper signed distances are immediately available one can easily convert a mesh to its implicit representation to me learning signed distance function as deepsdf does is more convincing since the direct supervision is available so why does this method bother to learn the proxy objective unsigned distance function 2 following 1 the most obvious application of this paper would be learning signed distance function when the distances are not available the input is either lidar scan or depth image in that case if the paper can reconstruct realistic 3d models it will be much stronger 3 to some extent this paper uses neural networks to learn sign priors from data there are multiple existing works on this direction which this paper doesnt mention or briefly mentions but doesnt compare to eg deep geometric prior for surface reconstruction and point2mesh a selfprior for deformable meshes the paper should 
at least explain the differences of the tasks if it doesnt compare to them 4 in the implementation detail the paper says it uses a similar architecture to deepsdf in the autodecoding case however the method shows improvements over deepsdf this seems impossible given that deepsdf learns from direct signed distance supervision so i am wondering if this is due to model size difference id like to see more comparisons to deepsdf under exactly the same model capacity docsep summary of paper and contributions sald extends prior work on sign agnostic neural implicit shape representations to include a loss term on the derivative of the implicit function the authors justify the benefits of derivatives in 2 ways a by citing prior work 1 which shows empirically that derivatives decrease sample complexity of deep relu networks and b by showing qualitative improvements over sal without derivatives the authors show qualitative evidence that global minimizers of sign agnostic losses with and without derivatives satisfy the minimal surface property a desirable property of solutions in commonly discussed the surface reconstruction literature they demonstrate this property via 2d experiments and via a motivating theoretical example finally the authors show their loss function can be integrated into existing generative shape modelling pipelines comparing results on shapenet and dfaust against deepsdf which requires precomputed sdf data and sal which can operate on raw inputs on the benefit of using derivatives the authors cite 1 to motivate the benefit of including derivative terms in the loss in the case of deep relu networks such as the one used by the authors this prior work shows an empirical reduction in sample complexity when regressing low dimensional functions section 41 motivated by a theoretical intuition section 3 while the neural implicit functions learned by sald are indeed low dimensional the shapespace learning problem is not it learns a map from a point set consisting of many points or a high dimensional 256 in the sald case latent code to an implicit function given this i dont believe the authors can simply claim a reduction in sample complexity by citing 1 without demonstrating further experimental evidence especially given the fact that the experiments in the paper do not show sald drastically improving over sal in particular i would be more convinced by an experiment showing the degradation of sal vs sald as the number of available samples for a shape is decreased when a regressing a single shape directly from data such as in igr 2 section 6 and b regressing a shape using an autodecoder minimal surface property showing that global minima to sal may satisfy the minimal surface property is indeed quite interesting i do feel however that the claim in the paper regarding this is a bit oversold in particular we prove that sal enjoys a minimal length property in 2d abstract and identifying and providing a theoretical justification for the minimal surface property of sal end of section 1 the minimal surface property is well known in the surface reconstruction literature eg 3 cited by the authors in section 3 and the theorem shown by the authors appears to be for a specific case in 2d unless i am missing something while these results are not trivial i feel the contribution should be rephrased to something along the lines of we give empirical evidence and theoretical motivation that minimizers of saltype losses produce solutions satisfying the minimal surface property citation experimental evidence 
i feel like the choices of datasets and baselines are sufficient to show the effectiveness of sald there are two experiments however which i feel are missing from the paper 1 the sample complexity experiment described above 2 some kind of performance evaluation i imagine that computing losses on gradients of networks is quite expensive how much is the increase in runtime compared to the gains in accuracy summary of review generalizing sal to include derivative quantities is a natural next step for this line of work the authors show that sald improves performance over the state of the art on shapenet and performs comparably on dfaust while these results are great i feel the paper is missing a few key experiments described above and that the claims around the minimal surface property are a bit overblown i am rating this paper as marginally below the acceptance threshold but am more than willing to increase my score if the authors make the requested revisions or give a strong justification as to why they are unnecessary in their rebuttal references 1 czarnecki et al sobolev training for neural networks 2 gropp et al implicit geometric regularization for learning shapes 3 zhao et al fast surface reconstruction using the level set method ### Summary:
Congratulations! The reviewers unanimously viewed this work positively and were in favor of acceptance to ICLR. While the current revision already addresses many reviewer concerns, it may be worth adding some of the datasets pointed out by R3 or comparing to some of the papers suggested by R1.
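For readers unfamiliar with the sign-agnostic losses discussed above, here is a rough sketch (my own, with assumed shapes and loss weights, not the authors' code) of a SAL-style value term together with a SALD-style derivative term: the value term matches |f(x)| to the unsigned distance to the raw point cloud, and the derivative term matches the gradient of f to the unsigned-distance gradient up to sign.

```python
# Rough sketch of a sign-agnostic loss with a derivative term in the spirit of SALD.
import torch
import torch.nn as nn
import torch.nn.functional as F

f = nn.Sequential(nn.Linear(3, 128), nn.Softplus(beta=100), nn.Linear(128, 1))

points = torch.rand(1024, 3)                  # raw input point cloud (placeholder)
x = torch.rand(256, 3, requires_grad=True)    # sample locations around the shape

# unsigned distance to the point cloud and its gradient (unit vector away from the nearest point)
d2 = torch.cdist(x, points)
h, idx = d2.min(dim=1)
v = F.normalize(x - points[idx], dim=1)

fx = f(x).squeeze(-1)
grad_fx = torch.autograd.grad(fx.sum(), x, create_graph=True)[0]

value_term = (fx.abs() - h).abs().mean()                        # sign-agnostic value loss
deriv_term = torch.minimum((grad_fx - v).norm(dim=1),
                           (grad_fx + v).norm(dim=1)).mean()    # sign-agnostic derivative loss
loss = value_term + 0.1 * deriv_term
loss.backward()
```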
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper presents a method to predict when and which two nodes in a graph community will be linked ie when and what event will happen rather than taking the whole graph into account the paper first leverages community detection algorithms to divide the network into subgroups and then performs event prediction within each group more specifically gcn or message passing is utilized to capture the topological information and the temporal point process is utilized to capture the temporal information strong points 1 the paper studies an interesting problem it is of practical significance to predict events within a community of a graph 2 the paper is wellmotivated and shows clearly the difference between it nad existing methods 3 it is not hard to understand the main idea of this work and the proposed model albeit simple indeed makes sense 4 the paper did lots of ablation studies to verify the effectiveness of the proposed model 5 code enclosed weak points 1 the paper repeatedly claims that it is the first to jointly predict the next events incident nodes and timestamp within a certain community however existing methods can already achieve this goal eg transformer hawkes process and its followup work 1 2 although the main idea of this work is clear the description of the model sec 3 is a bit hard to follow due to the huge amount of notations used moreover many symbols are not clearly explained some notations are in bold and some are not and the subscripts and superscripts are also confusing eg eqs 2 and 3 is eq 5 correct 3 the hierarchical probabilitychain forecaster is straightforward and i cannot learn too much from this design besides one concern is that in practice some links are undirected in such cases will the order of the factorized terms matter 4 in table 3 it is shown that the proposed model underperforms cep3 without using hierarchical factorization significantly although cep3 is faster the big drop in performance makes the model less attractive 5 stronger baselines eg transformer hawkes process and effects of key parameters eg l should be included it is also super important to use different community detection algorithms to divide the network into subgroups questions 1 why is ar forecasting not useful sometimes it looks like that using ar forecasting should do no harm to the model performance 1 zuo simiao et al transformer hawkes process international conference on machine learning pmlr 2020docsepthe paper describes a model for predicting events in a dynamic graph unlike most previous work the model predicts both the incident nodes and the time step of the event jointly rather than only predicting one given the other the model formulation and training is based on temporal point processes suggestions 1 the paper essentially proposes a new task a new model new baselines and a new evaluation metric which makes it difficult for the reader to judge how effective the model actually is i think carefully designed easy to interpret baselines are therefore crucial currently the authors only compare against neural network baselines which are all fairly similar to the proposed model i would strongly suggest comparing to a more straightforward baseline for example simply predicting past events with the same time interval as before or an average time interval in my experience dynamic graph data sets largely consist of repeated events meaning even such a simplistic baseline might 
perform fairly well the examples in figure 3 are encouraging but it is hard to judge how representative they are 2 in a similar vein as my comment above it would be informative to add the percentage of unique edges to the data set statistics and compare it to the predictions of the models again this would provide more context for the results 3 section 3 in particular subsection 31 could benefit from some major improvements in terms of clarity the paper claims that the model does not require unrolling because it uses a pure attention mechanism however at the same time it makes repeated references to recurrent states eg l 139140 151 among others it is unclear whether this refers specifically to the version of the model with an rnn cep w rnn or whether it is common to all models furthermore subsection 33 refers to autoregressive message passing which seems to also require a hidden state a more structured clear exposition would be helpful minor comments exponential in eq 7 is typeset incorrectly i can see a number of positive aspects about this work the task is carefully and elegantly designed and appears more practically useful than the formulations addressed in prior work i believe having an effective wellmotivated model for this task would be a great addition to the literature in this field and the proposed model looks like a step in the right direction it is commendable that the authors consider the scalability of their approach in my experience a lot of prior work is computationally expensive and this task is particularly interesting for large productiongrade graphs while the paper primarily combines existing neural network components i believe the novelty of the work is sufficient for publication given it addresses a relevant task that is of interest to the wider community however in its current form i am reluctant to recommend acceptance simply because the merit of the paper hinges a lot on the effectiveness of the proposed model which to me remains unclear if the authors could incorporate some of my suggestions above all an additional easy to interpret baselines i am happy to raise my scoredocsepthis paper mainly studies the forecasting problem on continuoustime dynamic graphs the main motivation is to jointly forecast multiple link events and their timestamps over dynamic graphs for this aim the authors propose a united model composed of graph neural networks and marked temporal point process for scalability the authors further propose to factorize the joint prediction problem into three easier conditional probability modeling problems experiments are conducted to show the improved performance in effectiveness and efficiency strengths 1 the paper is easy to read the organization of this paper is clear 2 it is wellmotivated for the studied problem its interesting to jointly consider forecasting link events and timestamps on dynamic graphs 3 the experiments part seems to be convincing with the new benchmark for the community event forecasting task weakness 1 it is an incremental work of existing forecasting methods on dynamic graphs 2 in the method part some proposed architectures are not explained very well for example why design hierarchical probabilitychain architectures as forecaster is it better for performance it will be better if some intuitions are given 3 in the experiments part some recent baseline algorithms are missing for comparisondocsep summary the paper looks at the task of event prediction within communities of continuous temporal dynamic graph ctdg it aims at jointly 
predicting the event time and the two nodes involved in the event with the cep3 method cep3 combines a gnn encoder a mtpp forecaster and a autoregressive message passing component to break the joint probability on event type and event time in conditional probabilities which is more scalable wrt the number of nodes the paper also propose evaluation experiments to measure the quality of entities and timestamps predicitons reasons for score overall i vote for weak reject the event prediction is clearly introduced and well formalized my major concerns are about the model presentation and the experimental setup see cons below hopefully the authors can address my concern in the rebuttal period pros 1 the presentation of the event prediction task on ctdg communities is clear and mathematically well formalized 2 the cep3 model combines different techniques to solve a new task in a fairly scalable fashion 3 this paper provides experiments which evaluate different parts cep3 it includes an ablation study and the evaluation of both entities and event time predictions cons 1 related work the related work description is spread over the paper and the appendix this makes sometimes redundant or harder to identify relevant related works more specifically l 5071 sounds a bit redundant with eg section 21 further the tpp related works cite only two related works while the literature is quite rich in this field as described in 28 some related works for tpp are only mentioned in appendix action suggestion i feel that concentrating the related work description at one place would improve the paper i would also extend the tpp related works by eg using the survey 28 and partly moving appendix b to the main paper 2 model 1 the model description sec3 is sometimes hard to follow the paper introduced a very large number of mathematical notations eq3would need some explanation even if it relies on previous works what is the meaning of each variable in eq3 the meaning of bold variables was unclear to me should bold variables be used for all vectorsmatrices what is the difference between the bold and not bold zil t in 3 should the vectors be denoted with arrows like in eq5 the notations are sometimes not consistent eg the neighorhood of v in eq2 and l 147 probably a typo action suggestion only present necessary equations in the main text to reduce the number of notations make mathematical notations more consistent 2 it is not clear to me why forecaster is hierarchical and it is not explained in sec 32 action suggestion explain the hierarchy aspect 3 specifically we initialize hatg0 with the candidate node set c as its nodes two candidate nodes are connected in hatg0 if their distance is within l hops the resulting graph encompasses the dependency between candidate nodes during the encoding stage this sentence was unclear to me 3 experiments 1 the evaluation is very dependent of the predefined communities the communities are computed with only the louvain algorithm which heavily suffers from the resolution limit especially for large graphs resolution limit in community detection fortunato et al it would be interesting to report results for different community sizes and number of communities for each dataset action suggestion take other clustering algorithms eg linkage algorithms spectral clustering to define communities in the experiments performs the experiments when the number of communities changes this is also possible with the louvain algorithm by changing the resolution parameter 2 eq 12 is supposed to evaluate the 
predictions uiviti but these predictions notations do not appear in the pp formula 3 eq13 compares ti and ti while the true even might be different from the predicted event thus it is possible to achieve good mae while the model is completely wrong in terms of entities since a key contribution is the joint prediction it would be more convincing to provide an experimental evaluation of the joint predictions action suggestion explicitly mention the limitations of this evaluation propose an experimental evaluation of the joint predictions 4 i appreciate the will to show results visualization as in fig 3 however fig 3 did not convince me that cep3 is better than dyrep in this specific case dyrep does not less similar to the groundtruth than cep3 action suggestion maybe another color scheme would show a better visualization another idea is to complement the plots with a quantitative metric measuring the distance to the gt next to each plot others i feel that it would be appropriate to cite the works who introduced ctdg framework in line 30 without these citations it is hard to understand where this common representation comes from typo thea l82 i m happy to improve my score if a majority of the above points are addresses eg with action suggestions post rebuttal i believe that authors improved the paper by providing clarifications and discussing the limitations of the work therefore i increased my score from 5 to 6 ### Summary:
we agree with the ac that this paper is ready for publication. we encourage the authors to incorporate the suggestions for clarity improvements, in particular those mentioned by reviewer xrhc.
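the hierarchical probability-chain forecaster discussed in the review above factorizes the joint probability of the next event's timestamp and incident nodes into easier conditionals, roughly p(t, u, v) = p(t) p(u) p(v | u). the sketch below illustrates one way a single event could be scored under such a factorization; it is a hedged reconstruction, not the paper's model: the exponential waiting-time distribution, the dot-product node scorers, and all names and shapes are assumptions, and conditioning of the node factors on time and history is omitted for brevity.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def chain_log_prob(node_emb, w_src, w_dst, w_rate, u, v, dt):
    """log-likelihood of one event (u, v, dt) under the factorization
    p(dt) * p(u) * p(v | u); node_emb is an (N, d) matrix of node
    embeddings produced by some graph encoder (gcn, message passing, ...)."""
    # p(dt): exponential waiting time with a learned positive rate
    rate = np.exp(node_emb.mean(axis=0) @ w_rate)
    log_p_dt = np.log(rate) - rate * dt

    # p(u): categorical over candidate source nodes in the community
    log_p_u = np.log(softmax(node_emb @ w_src)[u])

    # p(v | u): categorical over destinations, conditioned on the chosen source
    pair = np.concatenate(
        [np.tile(node_emb[u], (len(node_emb), 1)), node_emb], axis=1)
    log_p_v = np.log(softmax(pair @ w_dst)[v])

    return log_p_dt + log_p_u + log_p_v
```

training would then minimize the negative of this quantity over observed events within each community; the appeal of the chain factorization, as the review notes, is scalability, since each conditional ranges over n candidate nodes rather than the n^2 node pairs of the joint.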
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents new convolutional and pooling operators for protein structures these components are used to design an architecture that shows strong performance on several downstream tasks the main strength of the paper is the presentation of new ideas for modeling protein structures the proposed operators leverage the intuition behind convolutional networks but extend them for the protein case eg by introducing rotational invariance in addition to translational invariance the ideas themselves are interesting to machine learning researchers and useful to those working proteins due to the complexity of the model i recommend that the authors release their code so that other researchers could evaluate these ideas on additional problems the writing and presentation is clear weaknesses more updated baselines should be used for example for the sequenceonly baselines the authors should compare to protrans elnaggar et al 2020 or rives et al 2019 which show better results than the baselines used here on the structural side the authors should compare to the architectures proposed by du et al 2019 or anand et al 2020 a key sequence baseline is missing multiple sequence alignments the only tasks considered are classification tasks the paper could be improved by evaluating on more practical tasks such as protein design eg the tasks in du et al 2019 or anand et al 2020 the architecture described here could be very useful in those settings the authors compare to bepler et al 2019 which is a great baseline since it uses both sequence and structural information however it appears from the text that the authors used the version of this model provided by rao et al however rao et al simply used the architecture from bepler et al and retrained it on sequence data only therefore i recommend that the authors retrieve the weights from bepler et al directly on the fold classification task the hardest test set considered is fold in which proteins from the same superfamily are not present during training it would be interesting to evaluate the model on a harder generalization setting in which proteins from the same fold are also not present during training the delta between this model and deepsf decreases when the sets go from family superfamily fold to complete the picture it would be important to go one step further relatedly the authors have not demonstrated that the models can generalize to novel folds without demonstrating this the model cannot be used for important tasks such as protein design the paper would be much more compelling if the authors could show that their architecture generalizes better than prior work to accomplish this the authors would need to move beyond a classification framework toward a clustering framework because its impossible for a classifier to predict novel folds the names of the test splits on the fold classification task is nonstandard generally fold split means that proteins from the same fold are not included in the same set superfamily split means that proteins from the same superfamily are not included in the same set etc what the authors call the family split in which proteins of the same family are present during training is usually not included as overfitting to memorizing the training set could still result in good performance perhaps this is why the proposed model scores 989 here to summarize the weaknesses more work is needed on the baselines and metrics 
additional evidence is also needed to support that the model can generalize to unseen folds overall this paper is a great start and the proposed model architecture could be interesting to ml researchers and practitioners in the biology space in its current state this is a borderline paper because it is missing a critical component of generalization of novel folds which is necessary for this model to have significant impact in the field if the authors can resolve my concerns during the rebuttal period i am willing to raise my score update the authors have included an additional experiment around fold generation in sec 66 however no baselines are included so it is difficult to understand the result in context and understand how this method generalizes compared to existing methods the authors have also included two additional baselines bepler et al and msas more analysis is needed to compare this with sota in representation learning the authors compare to elnaggar et al 2020 but it isnt clear which model was used elnaggar et al 2020 have released a series of different models the authors should clarify this in the cameraready and ensure they used the best models released by elnaggar et al i have increased my scoredocsepthis paper describes a deep learning architecture for representing and performing classifications on protein structures the representation involves three different distances euclidean distance and the shorted path between two atoms where edges are either along covalent bonds or also include hydrogen bonds each atom has a vector of associated features and convolution is accomplished by defining a kernel on all three distances and then summing the features of each neighboring atom weighted by the kernel value the paper also proposes three proteinspecific pooling operations to cope with the large input size when representing all atoms in a protein overall this is an extremely clear paper and the core ideas appear to be sound furthermore the experimental validation is quite extensive and the results are impressively good some positive points are that the authors consider several different tasks and numerous stateoftheart methods are included in the comparison i particularly appreciated the careful ablation study demonstrating not just that the entire system works endtoend but that the various pieces each contribute to its behavior the experimental setup appears to be valid there is always the chance that these results could be optimistic due to presumably unintentional model selection happening during development of the proposed method or because of a mismatch between the training data used for the published models and the test set used here but i cant see how the authors could have done a better job to guard against such issues other than the obvious step of making their code and trained models publicly available it is unfortunate that the manuscript makes no mention of this one drawback to this work is its focus on recent literature i found it strange that the earliest citation in the related work section is from 2013 the tasks being solved here have been the focus of extensive research going back 25 years or more the manuscript is up front about the fact that a drawback of the method is its requirement that the input proteins have known 3d structure however another potential drawback is that the input does not take into account homology information drawn from eg a sequence similarity search over a large protein database this information is typically represented as a pssm column 
for each observed amino acid i would like to have seen this acknowledged since it seems like a potentially valuable source of additional information a minor point the introduction states that the model captures primary seconary and tertiary structure and then says that as chain bindings affect the tertiary structure the quaternary structure is captured implicitly but of course this argument could apply to any of the other levels amino acid sequence implicitly captures secondary and tertiary structure incidentally the murzin cite has an incorrect year 1955 docsepthis paper proposes a graph neural network architecture that operates on the atoms in a protein structure it proposes a specific multigraph and pooling model structure constructed using euclidian distance and 3 types of edges euclidian covalent and hydrogencov there are three consecutive levels of granularity with nodes corresponding to 1 atoms 2 amino acids and 3 grouped amino acids the model is used to make a global prediction for a protein and results are presented on the taks of fold classification and reaction classification i recommend rejection for the paper in its current form based on the concerns about the relevance of this method for fold classification 12 experiments its framing as representation learning and its framing as convolution vs graph neural networks strenghts the two key model choices feel like a powerful choices for a graph neural network with an interesting domainmotivated set of architectural choices construction of the hypergraph with shortestdistance edges of 3 types custom pooling of the graph from atom level nodes to groups of amino acids the paper has helpful visualizations and wellwritten however see below for concerns around framing excellent ablation study in table 2 weaknesses the key weakness is that the protein structure has to be provided as input to the network therefore fold classification is a flawed experiment as the full atomic coordinates is all thats needed for perfect assignment to the folds specifically comparisons to sequenceonly classification tape unirep etc are misleading as the authors point out the amount of available data is tremendously less than sequenceonly models in fact the framing as representation learning is odd in this context as there is no way to leverage unlabeled data and no selfsupervised objective is proposed i find the framing of the method somewhat misleading on a few counts a representation learning see remarks abvove no selfsupervised transfer of features b naming the core layer of the model a convolution on 3d protein structures is off a crucial element of standard convolution is the regularity of the domain while this is intrinsically graph structured data furthermore i believe the method still fits in the neural message passing framework see bullet below therefore the proposed architecture seems to be much better summarized as message passing graph neural network on a hierarchy of multigraphs hierarchy through protein pooling with 3 types of graph edges defined by bonds and euclidian distances c after pooling when vertices dont correspond to atoms but to clusters the proposed convolutiongcn does not directly apply anymore what are the edge connections at this stage i disagree with the phrase although this operation could appear similar to message passing algorithms they differ significantly i believe the method fits in the mpnn framework roughly as follows notation following gilmer et al 2017 renaming x xi to v w hidden state per node fv with v the node 
either atom group of atoms amino acid cluster of amino acids depending on stage in the hierarchy edge introduced if euclidian distance dvw me edge features 3 distances clamped 01 learned message function m kappaevw fw hvt1 mvt1 or possibly including batchnorm and relu the above reformulation is quite close to gcn from kipf welling 2016 but with more complex learned message function function of the 3 distances writing clarity for eq 1 the notation needs to be introduced with dimensions x kappa f specifically for kernel kappa it needs to be made clear that kappaj is a function from r3 to r comments on experimental results as mentioned above i think fold classification is not an appropriate benchmark for this model for enzyme reaction classification a sota method on this problem should be included as benchmark ryu et al 2019 deepec seems relevant here or a method based on hmm profiles edit reply to authors response and updated paper also see strikethroughs in the original review above fold classification let me withdraw my concern here and will defer to other reviewers ac judgement if this task makes sense with protein structure as input indeed it may not to be a trivial task framing as a representation learning improved in the updated paper b convolution still stands the point cloud convs are not a very good comparison since there is no graph structure there c pooled coarsened graph stages thanks for the pointer to end of sec5 positioning wrt message passing the paragraph is a big improvement removing some claims about oversmoothing however re the message passing function is learned this is still very much within the default mpnn framework from gilmer et al altogether the whole method would still be much better framed as a graphbased network rather than shoehorning this into a description of a single convolutional operator this will allow a proper discussion of what is currently the end of sec5 where the graph does not correspond to an atomlevel graph anymore rather they now correspond to amino acid or coarser level graphs it is confusing that this coarser graph stages are so briefly glossed over the citation to can also be understood in a message passing framework kipf welling 2017 is off should be gilmer et al 2017 httpsarxivorgabs170401212 in conclusion i am raising my score from 45 leaning towards 6 there is a lot of good work in this paper and i would consider the paper a clear accept with the same method and same results if it were thoroughly rewritten based on graph neural networks requiring full atomic structure as input to the method is the major limitation to the application and impact of the methoddocseppros i think the paper is exceptionally wellwritten and the figures are very carefully designed applaud thank you for proper traintestvalidation splits glad there are varying degrees of difficulty with proper heldout sequences i very much appreciate proper comparison to other methods very thorough less important but the model also performs better at these two tasks than any other approach i say this because i believe the field shouldnt always require soa if there is a significant technical advancement cons the authors site oversmoothing for why their convolution operator performs better but provide no direct evidence that this is the case it needs to be noted that this is either a hypothesis or more concrete evaluation of this needs to be performed to make this claim are there any replicates for standard error and ablation studies table 3 blast comparison is weak jackhmmer or hmmer 
based tools are more appropriate than blast neutral what defines a hydrogen bond this definition is clear to me in secondary structure but seems more loose in tertiary structure in your figures it looks like only carbons oxygens and nitrogens are defined what about hydrogens if hydrogens arent parameterized how do you define hydrogen bonds this may be good to clarify in table 2 does the modification of the architecture change the number of parameters definition of a ball query might be helpful are there any sequences with posttranslational modifications in the dataset if so how are those handleddocsepsummary the authors describe a method to transform 3d protein structures for supervised machine learning their method introduces a convolution operation that considers both the intrinsic distances between atoms as defined by their bond structure and the extrinsic distances as defined by 3d proximity they also introduce interpretable pooling operations developed using known biology of the amino acids overall the method is effective and straightforward to follow due to having avoided unnecessary complexity the figures greatly aid the reader the authors method outperforms a variety of competitive alternatives on protein fold and function classification tasks these are important problems for which the authors model has achieved a significant performance boost i dont see why this model wouldnt work well for any 3d protein structure labels that can be collected they also perform a through ablation analysis to establish the contribution of the various components of their method major comments i wasnt able to understand what the neighborhood ablations represent and how they differ from convolution ablations are the neighbors used for anything other than the convolutions for example covneigh uses only the intrinsic distances similarly to inconvc what makes these different minor comments on page 7 a table 4 is mentioned that doesnt appear to exist i think they mean table 3 ### Summary:
protein molecule structure analysis is an important problem in biology that has recently become of increasing interest in the ml field. the paper proposes a new architecture using a new type of convolution and pooling, both on euclidean as well as intrinsic representations of the proteins, and applies it to several standard tasks in the field. overall the reviews were strong, with the reviewers commending the authors for an important result at the intersection of biology and ml. the reviewers raised the following points: weak baselines (the authors responded by adding the suggested comparisons, which were not completely satisfactory), a focus mostly on recent protein literature, and the reliance of the method on 3d structure; the ac, however, does not consider the latter a weakness, as there are multiple problems that rely on 3d structure, which with recent methods can be predicted computationally rather than experimentally. we believe this to be an important paper and thus our recommendation is accept. as the ac happens to have expertise in both 3d geometric ml and structural biology, he/she would strongly encourage the authors to better do their homework, as there have been multiple recent works on convolutional operators on point clouds as well as intrinsic representation-based ml methods for proteins.
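one reviewer above recasts the proposed convolution in the message-passing framework: an edge is introduced when the euclidean distance is under a cutoff, edge features are the three distances (euclidean, covalent, hydrogen-bond) clamped to [0, 1], and the message is a learned kernel of those distances applied to the neighbour's features, m = kappa(e_vw) * f_w, summed over neighbours. the snippet below is a hedged illustration of that reading with an arbitrary two-layer kernel; it is not the authors' operator, and the parameterization is assumed.

```python
import numpy as np

def distance_kernel_conv(feats, edges, edge_dists, k1, k2):
    """one message-passing round: each neighbour's features f_w are scaled
    channel-wise by a kernel of its three distances to v and summed into v.

    feats      : (N, F) per-atom features
    edges      : iterable of (v, w) index pairs within the euclidean cutoff
    edge_dists : (E, 3) distances per edge, assumed clamped to [0, 1]
    k1, k2     : kernel weights of shapes (3, H) and (H, F)
    """
    out = np.zeros_like(feats)
    for (v, w), d in zip(edges, edge_dists):
        kappa = np.tanh(d @ k1) @ k2      # (F,) weight per feature channel
        out[v] += kappa * feats[w]
    return out
```

the pooling stages described in the reviews (atoms to amino acids to groups of amino acids) would then be applied between such rounds on progressively coarser graphs.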
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary the paper presents rates of convergence for estimating nonparametric functions in besov spaces using deep nns with relu activations the authors show that deep relu networks unlike linear smoothers can achieve minimax optimality moreover they show that in a restricted class of functions called mixed besov spaces there is significantly milder dependence on dimensionality even more interestingly the relu network is able to adapt to the smoothness of the problem while i am not too well versed on the background material my educated guess is that the results are interesting and relevant and that the analysis is technically sound detailed comments my main criticism is that the total rate of convergence estimation error approximation error has not been presented in a transparent way the estimation error takes the form of many similar results in nonparametric statistics but the approximation error is given in terms of the parameters of the network which depends opaquely on the dimension and other smoothness parameters it is not clear which of these terms dominate and consequently how the parameters w l etc should be chosen so as to balance them while the mixed besov spaces enables better bounds the condition appears quite strong in fact the lower bound is better than for traditional holdersobolev classes can you please comment on how th mbesov space compares to holdersobolev classes also can you similiarly define mixed holdersobolev spaces where traditional linear smoothers might achieve minimax optimal results minor defn of holder class you can make this hold for integral beta if you define m to be the smallest integer less than beta eg beta7 m6 imo this is standard in most texts i have seen the authors claim that the approximation error does not depend on the dimensionality needs clarification since n clearly depends on the dimension if i understand correctly the approximation error is in fact becoming smaller with d for mbesov spaces since n is increasing with d and what the authors meant was that the exponential dependnence on d has now been eliminated is this correct other on page 4 what does the curly arrow notation mean given the technical nature of the paper the authors have done a good job with the presentation however in some places the discussion is very equation driven for eg in the 2nd half of page 4 it might help to explain many of the quantities presented in plain words confidence i am reasonably familiar with the nonparametric regression literature but not very versed on the deep learning theory literature i did not read the proofs in detail docsepthis paper makes two contributions first the authors show that function approximation over besov spaces for the family of deep relu networks of a given architecture provide better approximation rates than linear models with the same number of parameters second for this family and this function class they show minimax optimal sample complexity rates for generalization error incurred by optimizing the empirical squared error loss clarity very dense could benefit from considerably more exposition originality afaik original techniques seem to be inspired by a recent paper by montanelli and du 2017 significance unclear pros and cons this is a theory paper that focuses solely on approximation properties of deep networks since there is no discussion of any learning procedure involved i would suggest that the use of the phrase 
deep learning throughout the paper be revised the paper is dense and somewhat inaccessible presentation could be improved by adding more exposition and comparisons with existing results the generalization bounds in section 4 are given for an ideal estimator which is probably impossible to computedocsepthis paper describes approximation and estimation error bounds for functions in besov spaces using estimators corresponding to deep relu networks the general idea of connecting network parameters such as depth width and sparsity to classical function spaces is interesting and could lead to novel insights into how and why these networks work and under what settings the authors carefully define besov spaces and related literature and overall the paper is clearly written despite these strengths im left with several questions about the results the most critical is this piecewise polynomials are members of the besov spaces of interest and relu networks produce piecewise linear functions how can piecewise linear approximations of piecewise polynomial functions lead to minimax optimal rates the authors analysis is based on cardinal bspline approximations which generally makes sense but it seems like you would need more terms in a superposition of bsplines of order 2 piecewise linear than higher orders to approximate a piecewise polynomial to within a given accuracy the larger number of terms should lead to worse estimation errors which is contrary to the main result of the paper i dont see how to reconcile these ideas a second question is about the context of some broad claims such as that the rates achieved by neural networks cannot be attained by any linear or nonadaptive method regarding linear methods i agree with the author but i feel like this aspect is given undue emphasis the key paper cited for rates for linear methods is the donoho and johnstone wavelet shrinkage paper in which they clearly show that nonlinear nonadaptive wavelet shrinkage estimators do indeed achieve minimax rates within a log factor for besov spaces given this how should i interpret claims like any linearnonlinear approximator with fixed n bases does not achieve the approximation error in some parameter settings such as 0 p 2 r wavelets provide a fixed nbasis and achieve optimal rates for besov spaces is the constraint on p and r a setting in which wavelet optimality breaks down if not then i dont think the claim is correct if so then it would be helpful to understand how relevant this regime for p and r is to practical settings as opposed to being an edge case the work on mixed besov spaces eg tensor product space of 1d besov spaces is a fine result but not surprising a minor note some of the references are strange like citing a 2015 paper for minimax rates for besov spaces that have been known for far longer or a 2003 paper that describes interpolation spaces that were beautifully described in devore 98 it would be appropriate to cite these earlier sources ### Summary:
the paper extends the results in yarotsky 2017 from sobolev spaces to besov spaces, stating that once the target function lies in certain besov spaces, there exist deep neural networks with relu activation that approximate the target at the minimax optimal rates. such adaptive networks can be found by empirical risk minimization, which however is not yet known to be achievable by sgd etc.; this gap is the key weakness of applying approximation theory to the study of constructive deep neural networks for certain approximation spaces, which lacks algorithmic guarantees. the gap is hoped to be filled in future studies. despite the incompleteness of the approximation theory, this paper is still a good solid work. based on the fact that the majority of reviewers suggest accept (scores 6, 8, 6), with some concerns on the clarity, the paper is proposed as probable accept.
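for context, the minimax rates the reviews refer to have a standard form in the nonparametric regression literature (quoted here from that general literature, not from the paper): for an s-smooth target in d dimensions, the optimal squared L2 risk over a besov ball scales as

```latex
\inf_{\hat f}\;\sup_{f^{*} \in B^{s}_{p,q}}\;
\mathbb{E}\,\bigl\|\hat f - f^{*}\bigr\|_{L^{2}}^{2}
\;\asymp\; n^{-\frac{2s}{2s+d}} ,
```

and the paper's claim, as discussed (and partly questioned) in the reviews above, is that sparse deep relu networks attain rates of this type adaptively, while linear non-adaptive estimators cannot in the regime 0 < p < 2, with a milder dependence on d for mixed besov spaces.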
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper introduces a method to enhance the global coherence of text generated from language models the proposed method time control under the assumption in the latent space of sentence embeddings the incoherent text can be seen as brownian motion in the latent space in order to enforce a goal to the generated text authors by fixing a start and end to this brownian motion the process of text generation can be modeled as a brownian bridge from this assumption the authors drive a method that consists of three steps 1 training an encoder to map sentences to a latent plan defined as brownian bridge 2 training a decoder to reconstruct sentences from the given context the true encoded vector of the target sentence from planning latent space using the trained encoder 3 at inference time given a start and endpoint a target trajectory of vectors z0 zt zt is sampled and use the decoder to generate a sentence based on this bridge authors run several experiments to 1 evaluate the hypothesis that the encoder can capture local text dynamics using sentence order prediction task 2 evaluate the decoder to generate local incoherent text using the textinfilling task 3 capture global text statistics by measuring the statistics length of wikipedia sections for city articles of the generated text and compare them to the ground truth 4 evaluate the overall coherence of the longgenerated text overall the results look convincing except for some caveats see the areas of enhancement pros the paper is well structured and easy to follow the idea of modeling sentences to a brownian bridge latent space is neat and generic enough to 1 allow for noise given its stochasticity 2 doesnt require explicit domain knowledge for planning well structured experiments sections with 4 rqs and results that confirm each of the hypotheses reproducibility and transparency in reporting of experiments in terms of available source code dataset information details about human evaluation generation examples areas of enhancement questions to authors the information about each of the ablations id bm could be explained better namely the section ablations theres a clear inconsistency in the best tc method between different latent dimensions 8632 in most of the experiments theres at least one of the 3 that is performing drastically worse than the other baselines while theres overall no clear winner i wonder if you have thoughts about this table 5 the vae32 method performs the best overall in wiki section although the tc 16 method has been highlighted as the best is there a reason behind this during the training of the decoder how do you make sure that the decoder uses the information given by the latent plan overall the paper would have benefited from an intrinsic visualization of the latent space to make sure for example that theres no information collapse of the embeddings when dealing with long sentences this could be done by visualizing the planning trajectory difference between coherent and incoherent text the paper introduces a simple method of preserving coherence in language modeling it builds on previous work that tried to implicitly model planning dynamics the introduced solution is effective and general enough to not need domainspecific planning information it is a good paper to accept overall i advise the authors to clarify the information about the used baselines in a more clear manner docsepthis paper proposes a 
generation from a language model not only from an initial state but also using a goal state instead of brownian motion the authors employ a draw from brownian bridge by designating initial and end states called time control experimental results show the proposed generation from brownian bridge is more natural and coherent for textinfilling task and also preserves text structures both by automatic evaluation and human evaluation this paper proposes a generation from a language model not only from an initial state but also using a goal state instead of brownian motion the authors employ a draw from brownian bridge by designating initial and end states called time control experimental results show the proposed generation from brownian bridge is more natural and coherent for textinfilling task and also preserves text structures both by automatic evaluation and human evaluation using brownian bridge is a very simple and effective idea for text generation my only concern is the range of its applicability while it is far more natural than a simple random walk time control only allows designating the first and last states for generation however in the actual situation it is not always the case for the first and sometimes last sentence should have a designated state first few sentences might constitute just an icebreak and the actual content might start after that more generally it is more desirable that we can condition the generation at arbitrary time in fact i think that this can be done by a conditional draw from a gaussian process since brownian motion corresponds to using an exponential kernel of gp sentence generation from conditional gp would be the way for the future extention of this work anyway this work will surely pave the way for such principled generations minor some tables are located within the main text tables and figures should be placed top or bottom of the paper for readability please use beginfiguret for something like that numerical results in tables can be rendered in a smaller font ie small also i recommend to condense line spacing for tables for readability using usepackagesetspace and beginspacing09 endspacing for example nice attempt for random generation from neural language models using the idea of brownian bridge this work will pave the way for more princpled random generation from language models docsepthis paper proposes to model the evolution of sentences in a document via a stochastic process specifically a brownian bridge process the paper start off by assuming that the generated sequences by autoregressive models like gpt2 follow brownian motion in that they tend to get incoherent and meander in the semantic space this paper aims to reduce this random behavior by pinning the endpoints of the trajectory and model the generation by brownian bridge process instead the key intuition behind this process is that given two endpoints z0 and zt the evolution of z along time t is a gaussian with mean that is some linear combination of z0 and zt this paper models text by training an encoder for sentences x that produces the embedding z by training over triplets x0 xt xt where 0tt that encourages zt to follow brownian bridge dynamics and uses contrastive loss with a negatively sampled xt for training the approach is tested for local coherence long range order sensitivity and generation of long sequences and is compared against ablative and external baselines the proposed approach does lead to learning of embeddings that are obtainable via linear combination and this leads 
to improved performance on sensitivity to sentence order in documents and document generation this paper has an interesting approach and tackles an important problem of streamlining sequence generation from autoregressive models the experiments show the value of learning a manifold over the latents that have a linear relationship with some stochastic perturbation they provide evidence that learning in such a manner is promising in order to maintain coherence over long text generation however the setting is fairly limited because this approach requires two contextual endpoints the start and finish this is especially underwhelming given that the introduction states that this approach aims to perform emphcontrollable goaloriented generation in my view the setting described and experimented with doesnt reflect this goal for example there are limited experiments with regard to controllable generation or goaloriented generation tasks secondly the assumption that autoregressive generation follows a brownian motion is strong and i would like to see some empirical evidence or theoretical argument supporting this one simple experiment could be to actually try to fit a brownian motion model to a bunch of sequences generated from gpt2 and show that this fitted model is not suitable for naturally occurring text experiment wise my biggest concern is the vae baseline the point of this baseline is to show that for the same setup of brownian bridge process contrastive learning is better than the vae objective but i feel that the vae implementation as described in the appendix does not make the comparison fair due to lack of details in the paper i am assuming that the priors pz0 and pzt are standard gaussians if this is not true then a clarification would ease this concern of mine but assuming this is true the loss basically tries to match the encoder distributions qz0 and qzt obtained by fthetax0 and fthetaxt to the standard gaussian what this means is that there is a pressure to make the 0 and t embeddings similar which is not at all what we want from this bridge process model a more careful instantiation of prior for vae or even learning a time sensitive prior would be a better implementation of the vae baseline table 1 is another concern this experiment basically trains a linear classifier over the encodings to identify if they are inorout of order the proposed approach is naturally suited for this metricclassifier because the encodings at different times are more or less linearly related with some stochasticity however this is not true for the other baselines so i am not sure what is the takeaway message from this experiment also more exposition on the brownian motion baseline would be helpful the current description is not enough to get an idea about what exactly was done for generation and other experiments with this baseline on a related point i dont get why bm for table 2 would be the same as the brownian bridge isnt it the case that brownian motion baseline doesnt get to see emphboth the endpoints if i am mistaken about this then more exposition is required here because i checked both the paper and the appendix carefully for this table 5 shows mixed results more discussion and analysis here would be helpful for clarification please make explicit whether the triplets have a notion of distance or not ie it is sensitive to different value of t depending on which sentence in the middle was sampled from the context i am assuming this is the case but clarification would be helpful also notation in 
equation 2s denominator is confusing are you summing over all the negative xt overall i think this paper is well motivated and proposes a reasonable solution to improve coherence of model generated text this is supported by ample experiments but i have serious concerns about some of the crucial experiments and baselines that i have detailed in my main review also i think that the paper could be clearer about its contributions and implementation details post rebuttal thanks to the authors for the detailed response addressing many of my concerns my biggest concern about the prior in the vae baseline is somewhat alleviated given that the the authors used different fixed priors for the two settings while this could be improved by having learnable priorsbetter priors i think the current setting makes the experiments reasonably sound i have raised my score docsepthe authors propose to use a brownian bridge process to model global coherence of a long piece of text they show how to train such a model in an encoderdecoder style setup using a contrastive loss to model the brownian bridge dynamics the authors then verify aspects of their model with a series of experiments to show that their model with an underlying generative process outperforms competing approaches on a variety of local and global coherence and generation tasks i really like the main modelling contribution of this paper it is this reviewers personal opinion that to do longform text generation it is not enough to generate tokenbytoken but that some highlevel planning is required and the brownian bridge process model time control tc the authors propose is definitely a good candidate to model the latent drift of discourse indeed papers like 1 already used random walkstyle models to explain properties of word vectors there are some prior works on using structured probabilistic models such as switching latent dynamical systems for text generation 2 which should also be cited the motivation of the model present is clear and the description of how the model is trained is generally clear enough to reimplement it wasnt immediately clear that training the model on triples only is enough to guarantee general brownian bridge dynamics for the entire text trajectory i feel a note should be added to clarify this my other quibble here is with how the model is presented although the general probabilistic model is written down in equation 1 the likelihood function ie the functional form of pzt x0 xt xt is not explicitly written down anywhere which leads to confusing things like the variance of the process sigma2 being used in equation 3 without prior introduction i feel like explicitly writing down the likelihood would make the equations in the paper flow much better i feel the major weakness of this paper is with the experimental sections for various reasons i have objections to each of the experiments which i will go through below the first experiment attempts to show that tc is a better model of local discourse coherence the authors take two sentences from a document k steps apart embeds them and them attempts to predict the sentence ordering from the embeddings they say that for k1 all models considered perform at chance level on all datasets and only show results for k5 and k10 however models trained using the k1 objective such as albert 3 and structbert4 seem to be able to perform the task better than chance so theoretically this should be possible therefore i think the baselines should at least include an albert model to show the performance 
upper bound on this problem further k5 or even 10 starts meaning the sentences start becoming very far apart 10 dialogue turns is more than enough to complete some of the simple dialogue agent tasks so its questionable whether the model is really modelling local dynamics at this point the second experiment looks at text infilling on the rocstories dataset and use bleu and bleurt to automatically evaluate their models although the bleurt results do not appear to be anywhere in the paper the reported bleu results are really low to the extent that its unclear whether an improvement from 2 to 5 bleu is really meaningful part of the issue is that bleu measures precision which penalises text generation where there are a variety of possible outputs for this reason 2 report rouge results on rocstories which are much better the missing bleurt results would help contextualise model performance here the human evaluation shows the model performs about as well as the ilm baseline from 5 which is ok i guess in addition the table ordering is incredibly confusing table 6 which shows the human evaluation for experiment 2 appears much later in the text after tables for the later experiments it took me a long time to find it can you group the tables a bit better in thematic order the third experiment attempts to measure global text dynamics by measuring length mismatch per section on wikisections its unclear what notion of global text dynamics the authors are referring to there are many theories on discourse coherence of long text and none of them easily map onto a simple measure of section length if the authors simply mean whether the model has learnt a notion of document structure i think it would be better to be more explicit about this showing that finetuned gpt2 cant even replicate the structure of a homogenous document corpus is an interesting negative result the fourth experiment forces models to generate beyond the expected document length by suppressing generation of the eod token im really not a fan of this experiment because i dont even expect tc to perform well on it do the authors just keep on conditioning the decoder on zt and force the model to generate from this at this point the model is just a standard autoregressive model so the modelling contribution should have no effect alternatively do the authors resample zt1 each time the model finishes generating a sentence in which case how do the authors preserve the brownian bridge dynamics conditioning on hitting a target state zt there are a few methodological issues with this experiment a better experiment to run would be to simply ask the human annotators to score texts freely generated from gpt2 and tc for coherence as a measure of how well tc can generate coherent text overall while the experimental section is weak i really believe the core idea of directed brownian dynamics for planning is a cool one and deserves to be shared more widely this is why i recommend acceptance references 1 randwalk a latent variable model approach to word embeddings sanjeev arora et al 2015 2 generating narrative text in a switching dynamical system noah weber et al 2020 3 albert a lite bert for selfsupervised learning of language representations zhenzhong lan et al 2021 4 structbert incorporating language structures into pretraining for deep language understanding wei wang et al 2021 5 enabling language models to fill in the blanks chris donahue et al 2020 post author response nonetheless we think these observations fit well with the intuition our work 
proposes neighboring sentences are close to each other and act like brownian motion where ordering is difficult to infer and goalorientedness discourse structure emerges on longer stretches of sentences in a document i like this framing currently its implicit in the paper but maybe it can be made more explicit that we expect the larger k results to be better and that this verifies the brownian bridge approach towards modelling text dynamics nonetheless the end arbiter of this task is a human how coherent do the generations sound to a human and we care about at least matching ilm a method developed specifically for textinfilling so its promising that our method performs better andor competitively with ilm on humanbased metrics bleurt and human evaluations in table 6 i think it should be made explicit then that ilm is in effect an upper bound for model performance as it is a model trained specifically to do the task and that matching the performance of ilm is actually a strong result for the tc model so to directly answer the reviewers question we do not condition the decoder on zt and do not resample new latents during generation the model is thus primed to generate much longer text than it was typically exposed to thank you for the clarification we in fact do already ask human annotators to score the generation rf table 7 in this setup we remove the middle section of the generated output as the text is extremely long see figures 36 for examples of the full forced long text generation results i believe the stronger and more realistic human evaluation is to not just evaluate the tail coherence on forced long text generation but instead directly sample from the model naturalistically and evaluate that output using human annotators if tc better captures global coherence this should be visible even in this setting overall i would like to thank the authors for their response many of my concerns have been addressed and i am happy to increase my score interesting modelling contribution to ensure global coherence of generated text the proposed modelling approach could have wide applicability which is why i recommend acceptance ### Summary:
all reviewers found that the proposed lm with brownian motion is interesting and novel. several reviewers raised minor concerns about the experiments, but these have been generally resolved by the authors.
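because the reviews reason about the bridge density without writing it down, the sketch below makes the textbook brownian-bridge transition explicit, which is what pins the latent plan at a start and goal embedding. it is a generic illustration: the latent dimension, number of sentence positions, and noise scale sigma are placeholders, and the paper's actual sampling and decoding code may differ.

```python
import numpy as np

def sample_bridge_path(z0, zT, T, sigma=1.0, rng=None):
    """draw a latent plan z_0, ..., z_T pinned at both endpoints.

    each step uses the exact brownian-bridge transition: given the current
    point at time t-1 and the pin z_T, the next point is gaussian with mean
    pulled linearly toward z_T and variance shrinking to zero near the pin."""
    rng = rng or np.random.default_rng()
    traj, z = [z0], z0
    for t in range(1, T):
        steps_left = T - t + 1
        mean = z + (zT - z) / steps_left
        var = sigma ** 2 * (T - t) / steps_left
        z = mean + np.sqrt(var) * rng.standard_normal(np.shape(z0))
        traj.append(z)
    traj.append(zT)
    return np.stack(traj)

# illustrative use: a 16-dimensional plan over 12 sentence positions;
# each z_t would then condition the decoder when generating sentence t
plan = sample_bridge_path(np.zeros(16), np.ones(16), T=11)
```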
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors propose a new explanation type called subgoalbased explanations in the setting of suboptimal and nonrobust intelligent decision support systems ids the aim of the explanation type is to serve as a training benefit for users to i determine when to trust that a recommended action is optimal and ii make better decisions in the absence of a recommended action through an enhanced understanding of the task due to prior provided explanations the proposed explanation contributes to improved user task performance and is preferred over other explanation types by users in the study performed the proposed explanation type is intuitively simple and straightforward the objective of the explanation is to guide naive users rather than the domain experts that most of the current explanation types cater to the proposed method does not assume that the ids system is ideal or optimal the approach is measured in four different dimensions i are users able to reject more suboptimal action recommendations ii are users able to make better decisions in case the ids breaks down and becomes unavailable iii do users perform the task better with subgoal explanations lower plan cost iv do users prefer the explanations over other explanation types there are however a few points id like to highlight 1 it would be helpful to the readers if the authors discuss how do contrastive explanations relate to the proposed explanations would it help to have contrastive explanations as a baseline for comparison it would also help to clarify why clc explanations are most relevant 2 is rejecting a suboptimal action always a good decision for the users it is possible that the user ends up making a decision that is worse than the suboptimal recommended action itll be nice to see a study of how many optimal and suboptimal decisions different from the ones that the ids suggests a user makes after it rejects the suboptimal action recommended by the ids with subgoal explanations 3 in the restaurant game planner description i suggest adding the reasons for why a horizon of 35 was prefixed 4 from the explanations it seems that replanning is needed to generate explanations if the user decides to reject the recommended action is that correct or does it replan after every action performed by the user should replanning cost maybe in terms of time be added to the overall cost for the task 5 a discussion on how rejected or accepted recommended actions by the user relate to the trust of the user on the ids would be interesting 6 did the participants involved in the study know the probability of the ids recommendations being accurate it would be helpful to conduct studies with ids systems with different accuracies currently only done with 85 9 in future work itll be interesting to see how subgoal explanations will perform in domains with deadends or reversible actions consider a situation where the user rejects the optimal action suggested by ids assuming it to be suboptimal and performs a worse suboptimal action this worse suboptimal action may either be impossible to recover from or may require reversible actions or reperforming the actions in the correct order can subgoal explanations also output a confidence level that will help a user to identify such critical actions and trust the ids recommended action more there are minor grammatical mistakes that need to be corrected for easy readability and to avoid confusion 1 the 
notation for pvalue in empirical evaluation coincides with the notation for probability p with which a recommended action is changed to a suboptimal action 2 in figure 2 the meaning of asterisks needs to be added to the caption 3 in the abstract a suboptimal actions suboptimal actions 4 in the introduction in the 3rd paragraph and 2nd line characters characteristics 5 in the introduction in the 4th paragraph and 3rd line the task performance is not negatively impacted by the sudden absence of the previously available recommendations the task performance is not negatively impacted by the sudden absence of the recommendations previously available should be removed as the previous recommendations are available but it is the current recommendations that become unavailable 6 in the paragraph above related work broadly applicable across are broadly applicable across 7 in figure 1 caption the statement that the planner will replan for a new plan gives the wrong notion that it replans as soon as the user rejects the recommended action however the replanning occurs after the user has performed an action even though different from the one recommended by ids the statement can be modified to the planner will replan for a new plan for subsequent action suggestions 8 in the planning problem definition the notation for model m is different for the transition function mathcal notation is not used 9 in the paragraph below hypothesis 1 specifically that with the aid specifically we hypothesize that with the aid 10 in the paragraph above restaurant game planner in the last second line given a recommendation the user can choose either to conform 11 figure 2 caption has errors user optimal action conformance and suboptimal action avoidance percentages for participants that received the three types of explanations from suboptimal ids systems 12 i suggest changing the word condition to study condition throughout the paperdocsepthe paper suggests explaining planning decisions by indicating the subgoal that the action aims to satisfy the approach has been implemented and evaluated in a userstudy comparing both objective performance and participant preferences against baselines prior approach and no explanation the evaluation also explores settings where the plan is optimalsuboptimal results show some benefits of the proposed approach one limitation of the current approach which is acknowledged in the paper is that the subgoals are predefined in the planning domain an interesting avenue for future work would be to explore subgoals at different granularity levels and perhaps allow the user to explore the hierarchy interactively the paper is wellwritten and the work seems both novel and fairly mature i think this work will make for an interesting discussion in the workshop and would be happy to see it presented ### Summary:
the paper proposes a novel approach for generating explanations for suboptimal ids systems both reviewers agree that the methods described are technically sound and the paper is at a fairly mature stage it would be a valuable addition to the workshop as you move forward we suggest you take into account the reviewers comments especially those of reviewer 1 as they highlight some interesting points thank you for submitting to the workshop we are looking forward to your presentation
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors introduce two strategies for predicting interatomic potentials the first one is based on label augmentation in this case an auxiliary training is performed to classify the bestperforming physicsbased empirical interatomic potentials eip for a given atomic configuration if a given configuration c results in an a reliable energy e there is also a label for no good classification then the pair ce is appended to the dfte training set this strategy yields a performance boost from 18 to 51 the second method is based on transfer learning where a nn is trained using eip alone and then finetuned based on dft energies in this case the improvements are from 18 to 26 strengths the paper proposes a neural network potential based on physicsbased eip and dftenergy labelled dataset the novelty of the manuscript is exploiting this multifidelity data using label augmentation a novel approach for neuralnetwork potentials also the increase in the performance is obtained with little cost in the computational load mainly the auxiliary training weaknesses 1 there is no comment on how this potential generalizes outside the selected species 2 the methodology is limited to only two singlespecie materials 2 the paper is overall wellpresented but it lacks clarity is several aspects see below for more details the authors state that their work is limited to singlespecie materials si and al in this case and that future work will possible include multispecies ones i believe limitations are adequately addressed docsepthis paper propose to inject the domain knowledge in empirical interatomic potentials eips into neural networks by using the data generated by eips eips is much faster than dft and reasonably accurate however multiple eips may be applicable which eip to trust their accuracy varies in the configuration space when should we trust eips two strategies are presented la label augmentation semisupervised learning a classifier is trained to jointly handle the two issues above it predicts the bestperforming eip for a given configuration which eip to trust if none is sufficiently accurate outputs a dummy indicator when should we trust eips this classifier is trained on configurations where dft data is available how much can it be generalized to unseen configurations this classifier is then used to augment data using the predicted bestperforming eip on each configuration this builds an augmented dataset consisting of both eip samples and dft samples as eip labels are less accurate a tukey loss is used as its less sensitive to outliers as its capped mse is used if the label comes from dft mp multitask pretraining transfer learning instead of only using the label from the predicted bestperforming eip this strategy use all eips labels in pretraining in a multitask way then dft data is used to finetune the model the strategies are then demonstrated with two typical model backbones soapnet as descriptors mlp and schnet as xyz gnn on two material datasets kimsi for silicon and anial for aluminum both datasets are energy prediction tasks both strategies are shown to be able to reduce mae and combining mp and la can achieve even more improvement the main contribution is the two strategies proposed and demonstrated to help neural networks with eips data which is usually much cheaper than dft strengths they propose two strategies shown to improve neural networks with eips data which is usually much cheaper 
than dft they demonstration with both descriptors mlp and xyz gnn models these two cases are very representative and therefore support the significance of their results several designs are reasonable eg dummy eip tukey loss weaknesses the sensitivity of the performance to the selected set of eips 8 for silicon and 10 for aluminum as in the paper is not clearly addressed how will the performance change if only 2 or 3 eips are selected whats the trend of model performance as the number of eips increases can be more sound if the author compare the performance between the learned models vs the eips used to build dataset this can be evaluated for configurations with dft labels some typos line15 bolster booster line165 dft and eip energies dft energies see weakness the sensitivity of the performance to the selected set of eips is not clearly addressed docsepin this manuscript the authors proposed to incorporate domain knowledge into machine learning empirical interatomic potentials with two techniques a weakly supervised learning based on auxiliary classifiers and a pretrainingfinetuning mode based on transfer learning their experiment results have shown a comprehensive outperformance over the baseline methods on systems with a single atomic species on the strengths side the presented method attempts to solve an import problem by leveraging unlabeled training instances generated from eips the paper is well written and easy to follow the main ideas are clearly explained and the empirical evaluation protocols and results are well presented however both technical and theoretical contributions are inadequate the auxiliary task modules and pretraining strategies have been widely applied to the gnns albeit effective these techniques are quite straightforward and lack technical motivation and insights furthermore the evaluation of the experiment is weak the authors chose two old methods soapnet in 2013 and schnet in 2017 for validation and cannot show that these strategies are still valid in the current sota frameworks eg dimnet arxiv200303123 and gemnet arxiv210608903 the evaluation conducted only on the single atom system also raises serious concerns about its generalization ability overall i think this work may be more appropriate for a journal in computational chemistry rather than a machine learning conference i did not find any potential negative societal impact docsepthis paper proposes to improve the prediction performance of the expensively computed material energy from density functional theory dft by making use of the cheaply computed material energy from empirical interatomic potentials eip two strategies are proposed to use eip based labels including label augmentation and multitask pretraining experimental results show that these two strategies can both improve the prediction performance of two models strengths this work gives a very meaningful exploration of improving the neural network model for dft computation prediction with large amounts of eip labeled data given that dft labeled data is limited the success of the proposed method can motivate researchers working on developing machine learning models for dft prediction to consider using cheap data source to improve their models the writing and organization of the paper is clear and easy to follow weaknesses in experiments soapnet and schnet are used as prediction models however they are designed for molecules not the best models for material data it is better to use existing material property prediction models such as cgcnn 1 in 
experiments necessary description about how label augmentation and multitask pretraining strategies are combined in experiments is lacking 1 xie tian and jeffrey c grossman crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties physical review letters 12014 2018 145301 na ### Summary:
this paper proposes two strategies for injecting domain knowledge into neural networks for predicting material properties these strategies lead to substantial accuracy gains all reviewers had positive feedback on the paper and their suggestions helped improve the paper and the experiments accept
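The reviews above describe mixing trusted DFT labels with noisier EIP-generated labels, using a bounded Tukey loss for the latter so that inaccurate EIP energies cannot dominate training. Below is a minimal sketch of such a mixed-fidelity objective; the Tukey biweight form, the threshold value, and the boolean is_dft mask are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def tukey_biweight(residual, c=4.685):
    # Bounded robust loss: behaves like a scaled squared error near zero
    # and saturates at c**2 / 6 for large residuals (outlier-resistant).
    r = residual / c
    inside = (c ** 2 / 6.0) * (1.0 - (1.0 - r ** 2) ** 3)
    cap = torch.full_like(residual, c ** 2 / 6.0)
    return torch.where(residual.abs() <= c, inside, cap)

def mixed_fidelity_loss(pred, target, is_dft):
    # MSE for trusted DFT energies, bounded Tukey loss for EIP-generated labels.
    residual = pred - target
    per_sample = torch.where(is_dft, residual ** 2, tukey_biweight(residual))
    return per_sample.mean()

# hypothetical usage: pred and target are per-configuration energies,
# is_dft marks which labels came from DFT rather than an EIP
pred, target = torch.randn(8), torch.randn(8)
is_dft = torch.tensor([True, False] * 4)
print(mixed_fidelity_loss(pred, target, is_dft))
```

The capped branch limits how much any single EIP label can contribute, which is the stated motivation for preferring a robust loss over plain MSE on the augmented subset.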
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper aims to facilitate feature learning in nn models by exploiting more from reliable examples this is very similar to selfpaced learning where the model learns from the easier samples at first and proceeds to learn from difficult and challenging samples the authors should discuss their difference with selfpaced learning the method is positioned as a general one for feature learning i do not know the reason why the authors only apply for object detection on a very specific dataset it is expected to see whether the proposed method is also effective for image classification more datasets for evaluation are needed even only for the object detection application docsepoverview the authors tackle the problem of detecting smalllow resolution objects in an image their key idea is that detecting bigger objects is an easier task and can be used to guide the detection of smaller objects this is done using the feature intertwiner which consists of two branches one for the larger objects more reliable set that is also easier to detect and one for the smaller objects less reliable set the second branch contains a makeup layer learned during training which acts as the guidance from the more reliable set that helps compensate details needed for detection the authors define a class buffer that contains representative elements of object features from the reliable set for every category scale and an intertwiner loss that computes the l2 loss between the features from the less reliable set the class buffer they also use an optimal transport procedure with a sinkhorn divergence loss between object features from both sets the overall loss of the system is now a sum of the detection loss the intertwiner loss and the optimal transport loss they evaluate their model on the coco object detection challenge showing stateoftheart performance they also provide thorough ablation analysis of various design choices the qualitative result in fig1 showing well clustered features for both high low resolution objects via tsne is a nice touch comments clarity the paper is well written and easy to follow originality significance the paper tackles an important problem and provides a novel solution quality the paper is complete in that it tackles an important problem provides a novel solution and demonstrates via thorough experiments the improvement achieved using their approach questions 1 the class buffer seems very restricted in having a single element per object category per scale to represent all features the advantage of forcing such a representation is tight clustering in the feature space but wouldnt a dictionary approach with multiple elements give more flexibility to the model and learn a richer feature representation at the cost of notsogood clustering 2 any comment on why you drop performance for couch and baseball bat bedroll 3 in table 4 of appendix where you compare with more object detection results i find it interesting that mask rcnn updated results has a might higher aps 435 compared to you 272 and everyone else i was expecting you to be the best under that metric due to the explicit design for small objects they maskrcnn updated results are also significantly better than the rest under apm but worse under apl can you explain this behavior is the resnext backbone that much better for small objects docsepthis paper proposes a novel approach with the hypothesis that the reliable features can 
guide the less reliable ones this approach is applied to the object detection task and show consistent performance improvements pros this paper is wellwritten and easy to follow the base idea that divides the learned features into two sets the reliable feature set and the less reliable one is very interesting and looks novel plus the hypothesis which is that reliable features can guide the features in the less reliable set is also interesting the performance improvements are quite large extensive ablative studies are provided to support the proposed method well cons the method of obtaining the representative in buffer b is not clearly presented the overall training and inference procedure are not clearly presented some notations and descriptions are vague and confusing more than two datasets are necessary to show the effectiveness of the methods comments what is the higher level feature map pm and how did you choose the higher level feature map at the mth level in option b and c in section 33 what is the meaning of the past features in section 32 it is better to show the exact architecture of the makeup module and the critic module can this method apply to the other backbones such as vgg or resnets without fpn the sentences at the bottom of p4 starting with note that only looks ambiguous fcriticj may be the jth element of fcritic please denote what fcriticj stands for even if the paper needs to be revised for better readability i think this paper is above the standard of iclr because the idea is interesting and novel furthermore the experimental studies are properly designed and well support the main idea i am leaning toward acceptance but i would like to see the other reviewers comments ### Summary:
the paper proposes an interesting idea using reliable samples to guide the learning of less reliable samples the experimental results and detailed analysis show clear improvement in object detection especially for small objects on the weak side the paper seems to focus quite heavily on the object detection problem and how to divide the data into reliable and less reliable samples is domainspecific it makes sense for object detection tasks but its unclear how to do this for the general scenarios the authors promise it would make more sense to change the title to feature intertwiner for object detection to alleviate such criticisms that said i think this paper is over the acceptance threshold and would be of interest to many researchers
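The feature intertwiner discussed above keeps a per-class buffer of reliable (large-object) features and pulls less reliable (small-object) features toward it with an L2 term. The sketch below is a simplified illustration under assumed shapes and an assumed EMA update rule; the paper's exact buffer construction, optimal-transport term, and critic/make-up modules are omitted.

```python
import torch

class ClassBuffer:
    # One prototype per class, built only from the reliable (large-object) set.
    def __init__(self, num_classes, feat_dim, momentum=0.9):
        self.proto = torch.zeros(num_classes, feat_dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, reliable_feats, labels):
        # EMA of the mean reliable feature per class (assumed update rule).
        for c in labels.unique():
            mean_c = reliable_feats[labels == c].mean(dim=0)
            self.proto[c] = self.momentum * self.proto[c] + (1 - self.momentum) * mean_c

def intertwiner_loss(buffer, small_obj_feats, small_obj_labels):
    # L2 pull of less reliable (small-object) features toward their class prototype.
    target = buffer.proto[small_obj_labels]
    return ((small_obj_feats - target) ** 2).sum(dim=1).mean()
```

A reviewer's question about a single element per class versus a richer dictionary maps directly onto the `proto` tensor here: replacing each row with several slots would trade tighter clustering for more representational flexibility.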
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: let me note that i have very little expertise in quantization and so cannot really judge the significance of such contributions i am however very familiar with the gnn literature summary a method to train gnns such that later quantization works well is presented the authors first analyse the message passing definition to identify those computation steps whose results show the largest variance and hence suffers most from the imprecision introduced by quantization consequently hey focus on the message aggregation phase of message passing they then propose two improvements to more standard quantizationaware training qat 1 applying quantization during the forward pass only on message aggregation outputs and doing it more often on nodes that receive many messages and 2 using percentilebased statistics for determining the ranges of values considered during quantization finally experiments show that the resulting training procedure works well for gnns on a number of datasets matching or slightly improving the baseline performances in most cases the proposed degreequant method also outperforms baseline qat methods strongweak points empirical results show moderate gains over the baseline qat methods for int8 quantization and substantial gains for very coarse quantization to int4 thoughtful experimental ablations study the effect of the two improvements separately and further empirically verify the theoretical analysis of sources of errors the paper is not selfcontained and hence not easily readable for people without background knowledge in quantization while gnns are fully though very densely defined in sect 21 no technical details on quantization are provided in sect 22 i ended up skimming some of the cited papers to even understand how values are practically mapped between fp32 and int8 consequently sect 32 is discussing extensions and alternatives to concepts that are simply not explained in the paper recommendation i think this paper can be accepted and would be useful for the very narrow segment of people interested and knowledgeable in gnns and quantization however in the current form it is inaccessible to a wider audience and i believe that it could be significantly improved in that regard questions 1 message aggregation is identified as a key source of quantization error due to the variance in the number of messages for graphlevel tasks such as mnist cifar and zinc the aggregation of node representations to a graph representation should lead to a similar problem do you have deeper analysis on this aspect detail feedback sect 31 end the mixing of gcn and gin is somehow confusing and it would be wortwhile to restructure this ie mathbfygini is defined before the equation its used in but mathbfygcni after etc sect 32 alg 1 i found the use of mask masking here highly confusing as i associate it with removing a value as in masking of loss components dropout masks hiding a human face behind a cat mask but here the semantics is inverted masks determine which values are more visible by not applying the quantization to them unless this term is already in standard related use in the quantization literature i would strongly recommend to use a different term here eg preserved protected fig 56 are not readable for colorblind peopledocsepthe authors propose a new technique for quantization aware training of neural networks that is specially suited for graph neural networks they do a good job 
of motivating the problem by demonstrating that the large variation of input degree in gnns can lead to unique challenges for numerical precision forcing a compromise between truncation error and rounding error th proposed technique incorporates stochastic masking and quantization proportional to the input degree to allow higher inputdegree nodes to operate at higher resolution on average the authors demonstrate strong improvements over quantization aware training that treats all nodes equally achieving relatively small drops in accuracy for a large compression and speedup of gnn inference the work is presented in a straightforward and clear manner with clear applications to important problems two small things that could improve the paper percentile tracking is a component to the methods but relies on a reference for full explanation a more precise statement of this part of the method in the paper itself would help clarify for readers minor nit but some acronyms are used before they are defined such as gcn docsepedit sorry the previous review was for a different paper that ended up in here due to a copypaste issue this paper uses quantization and quantization aware training qat to improve the speed performance of gnn inference for three types of gnn modes gin gcn and gat the paper identifies the aggregation step to be where quantization introduces the most numerical error and use stochastic masking and clipping the topbottom values to mitigate the issue this topic is very relevant and interesting and novel to the best of my knowledgealthough im not familiar with the literature surrounding quantized neural networks there are places where the writing can be more careful for example in the abstract the authors write little research exploring methods to make gnn more efficient at inference time however there has been research focusing on both hardware acceleration 1 and making gnn models smaller 2 quantization isnt the only approach to make gnn inference faster claims like it is not possible to deploy this technique on smartphones from intro paragraph 2 should be supported since its difficult for a reader to verify such a claim some of the claims like the one bolded in table 1 should be in the abstract im not sure if this is typical in the quantization literature but a wallclock time comparison would be useful in table 2 to compare the time speedup against the baseline one other presentation feedback in figure 1 the xaxis is not continuous a line chart is not appropriate since the slope of the line segments in the chart is meaningless removing the lines connecting the dots would make more sense 1 zeng and prasanna 2020 graphact accelerating gcn training on cpufpga heterogeneous platform httpsarxivorgabs200102498 2 yan et al 2020 tinygnn learning efficient graph neural networks httpsdlacmorgdoiabs10114533944863403236 ### Summary:
the paper presents a quantization aware training method for gnns the problem is very well motivated the method is well executed and the experiments are also well designed the paper does seem relatively low on technical novelty but all the reviewers are positive about the paper and the paper has certainly improved significantly during the rebuttal phase so we would like to see the paper accepted at iclr
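As a rough illustration of the two ingredients the reviews highlight, degree-dependent protective masking and percentile-based range tracking, the sketch below fake-quantises message-aggregation outputs except for stochastically "protected" high in-degree nodes. The linear probability schedule, percentile choices, and straight-through estimator are assumptions for illustration, not the exact recipe from the paper.

```python
import torch

def fake_quant(x, num_bits=8, low_pct=0.01, high_pct=0.99):
    # Uniform fake quantisation with percentile-based (outlier-robust) range
    # tracking and a straight-through estimator for the gradient.
    lo, hi = torch.quantile(x, low_pct), torch.quantile(x, high_pct)
    scale = (hi - lo).clamp(min=1e-8) / (2 ** num_bits - 1)
    clipped = torch.min(torch.max(x, lo), hi)
    q = ((clipped - lo) / scale).round() * scale + lo
    return x + (q - x).detach()

def degree_masked_aggregation(agg_out, in_degree, p_min=0.0, p_max=0.2):
    # Nodes receiving many messages are "protected" (kept in full precision)
    # with higher probability, since their aggregation outputs vary the most.
    d = in_degree.float()
    p_protect = p_min + (p_max - p_min) * (d - d.min()) / (d.max() - d.min() + 1e-8)
    protected = torch.rand_like(p_protect) < p_protect
    return torch.where(protected.unsqueeze(-1), agg_out, fake_quant(agg_out))
```

Protecting high-degree nodes during the forward pass addresses the truncation-versus-rounding trade-off one reviewer describes: the nodes with the widest value ranges see exact values more often while the quantiser is being trained.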
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper addresses a timely topic for identification of samples which are outofdistribution at test time the approach is evaluated for a segmentation task and delivers promising results their method is compared to other stateoftheart ood methods 1 as far as i understood eg fig 2 the proxy task is learned together with the task of interest segmentation i am wondering how this effects the performance of the segmentation network g itself in comparison to isolated training of the segmentation network does the performance degrade when ood is an additional goal 2 it is known that segmentation networks for lv perform very well on center slices but have problems in the apex and base regions because such regions look very different from the majority of slices have you used 2d or 3d segmentation networks is the ood detection based on 2d or 3d samples your ood detection might be too sensitive towards these regions please add additional result descriptions showcasing the performance of ood in center slices vs apex vs base 3 from the description in sect 4 the evaluation strategy using these 3 dat sets is not completely clear to me it would be helpful to provide an illustration showing the crossvalidation and the respective id and ood settings 4 in the lssc segmentation loss is not mentioned 5 the narrative of the paper is easy to follow however some statements are confusing and should be changed require no modification in the network architecture or training procedure p2 there might be no changes necessary to a segmentation network itself yes but the presented approach might degrade the performance of the target task see point 1 i think changing the loss such as in eqn 2 actually changes the training procedure selfsupervised tasks do not require manual annotations and so the performance in test samples can be assessedp2 this statement is not clear to me unlike current stateoftheart the proposed approach does not require the use of a speciffic proxy task or training the model with the explicit goal of ood detection across three cmr datasets and for two different proxy tasks p2 first you argument that you do not need a proxy task and then you are actually using 2 proxy tasks i guess you want to highlight that you do not require additional labels for the proxy tasks similar statements can be found at the end of the related work fig 1 from which dataset is the ground truth segmentation i recommend using violin plots instead of boxplots for the results please indicate mean and median in the violin plots 6 what is the difference between mm data sets and sunnybrook does sunnybrook data come from a different vendor than the two mms which would qualify it for such an ood analysis docsepthe paper is wellstructured and wellwritten this suggests its maturity literature review is decent and points at the right direction methods are clearly explained and easy to understand experiments include 3 datasets with different manufacturers and sources promising results with potential developments for other domains the segmentation networks here benefit from two auxiliary losses or selfsupervision tasks one is based on contrastive learning and the other on edge detection my intuitive understanding is that edge detection provides a template for the segmentation branch to fill in contrastive task however makes it robust to certain types of transformations while reinforcing the representational capabilities of the 
network considering the overarching goal of the study which is detecting ood examples contrastive task makes more sense to me the other one also seems to work but maybe i like the contrastive one better anyways the results show that the contrastive loss works preferably better too so my question is along the lines of the contrastive learning the contrastive learning algorithm typically require large minibatches or memory banks to exploit the similarities between a good amount of data points these in practice can trick implementations however there is no information regarding these aspects in the current work did you implement something new do you have an interesting design to share if you used an existing solution what was that any modifications or special tricks played in this study i can see that you have space limitations here but 12 pages in appendix could help in addition training on images from vendor a seems to allow for better generalization overall i also checked appendix b what is special about vendor a what are the main differences from vendor b i am not asking you to give names but some comparison could help us understand what is going on better for instance old vs new machinetechnology image resolution technical capacities of operators procedural differences 3fold cross validation in general dnns exhibit a great deal of diversity in their function estimations due to various factors such as optimisation trajectories randomness in data shuffling and augmentation etc using a larger number of folds eg 10 would allow for a better established results and improve the trust in findings here finally i need to ask an obvious question did you consider using both contrastive and edge detection tasks together it seems like these could be combined into the same network architecture these tasks could complement each other possibly can you speculate a bit in this regard what would happen if this was implemented did you have any concerns for not implementing it this way i am just curious docsep the paper is easy to read and interesting the paper addresses the very relevant topic of ood sample detection the authors provide an extensive literature review the authors method description is detailed and mostly clear the authors provide extensive results with comparisons to multiple other methods the authors highlight both advantages and limitations of their methods the authors description for data splitting and evaluation 4 is not entirely clear which dataset is used in which fold which datasets are used for hyperparameter tuning and which datasets are used for reporting results the current description sounds like validation and test results are averaged which the authors probably did not do the authors should be moderate with their claim of novelty the authors cite the paper hendrycks 2019 which employs a selfsupervised rotation loss which is also used for ood scoring i recommend to explicitly state that selfsupervised losses have been used for ood scoring and that the authors adopt this approach for cmr segmentation see end of introduction docsepinteresting use of an edge detector as a proxy task to detect outliers methods evaluated on multiple external datasets presentation unclear the first time that the proposed method is stated in details is page 4 not clear how ood is defined in the experiments ### Summary:
the paper proposes a method to indicate when a test sample differs from those in the training distribution by detecting such ood test cases the proposed method aims to raise a flag that the learned model cannot be trusted on such ood test data as the reviewers have pointed out this is an important limitation of current methods and needs addressing while i am positive about the paper after the rebuttal stage some issues raised by the reviewers still remain the methodology relies on existing methods combining voxellevel uncertainty estimation with the value of the selfsupervision loss the empirical analysis does not employ existing methods for outlier or novelty detection for which a lot of literature exists both with and without deep learning the employed baselines are standard dnnbased methods for the selfsupervised task of edge detection the method relies on edge locations which is a very laborious task
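The approach reviewed above trains the segmentation network jointly with a label-free proxy task and then reads the proxy signal at test time as an out-of-distribution indicator. A hedged sketch of both pieces follows; the loss weighting and the cosine-alignment form of the test-time score are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_training_loss(seg_logits, seg_target, proxy_loss, proxy_weight=0.1):
    # Supervised segmentation term plus a label-free self-supervised proxy term
    # (contrastive view alignment or edge prediction in the reviewed paper).
    return F.cross_entropy(seg_logits, seg_target) + proxy_weight * proxy_loss

@torch.no_grad()
def ood_score(z_view1, z_view2):
    # Test-time score with no labels: if the encoder cannot align embeddings of
    # two augmented views of the same scan, flag it as likely out-of-distribution.
    z1 = F.normalize(z_view1, dim=-1)
    z2 = F.normalize(z_view2, dim=-1)
    return (1.0 - (z1 * z2).sum(dim=-1)).mean().item()
```

Because the score needs only the image and its augmentations, it can be computed on any incoming scan, which is what lets the method flag unreliable predictions without ground-truth contours.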
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: i really liked this paper and believe it could be useful to many practitioners of nlp conversational ml and sequential learning who may find themselves somewhat lost in the everexpanding field of dynamic neural networks although the format of the paper is seemingly unusual it may feel like reading a survey at first the authors propose a concise and pedagogical presentation of jordan networks lstm neural stacks and neural rams while drawing connections between these different model families the cornerstone of the analysis of the paper resides in the taxonomy presented in figure 5 which i believe should be presented on the front page of the paper the taxonomy is justified by a thorough theoretical analysis which may be found in appendix the authors put the taxonomy to use on synthetic and real data sets although the data set taxonomy is less novel it is indeed insightful to go back to a classification of grammatical complexity and structure so as to enable a clearer thinking about sequential learning tasks an analysis of sentiment analysis and question answering task is conducted which relates the properties of sequences in those datasets to the neural network taxonomy the authors devised in each experiment the choice of nn recommended by the taxonomy gives the best performance among the other elements presented in the taxonomy strength o the paper is thorough and the appendix presents all experiments in detail o the taxonomy is clearly a novel valuable contribution o the survey aspect of the paper is also a strength as it consolidates the readers understanding of the families of dynamic nns under consideration weaknesses o the taxonomy presented in the paper relies on an analysis of what the architectures can do not what they can learn i believe the authors should acknowledge that the presence of long range dependence in sequences is still hard to capture by dynamic neural networks in particular rnns and that alternate analysis have been proposed to understand the impact of the presence of such long range dependence in the data on sequential learning i believe that mentioning this issue along with older httpaidinfounifiitpaolopstnn94gradientpdf and more recent eg httpproceedingsmlrpressv84belletti18abelletti18apdf and httpsarxivorgpdf180300144pdf papers on the topic is necessary for the paper to present a holistic view of the matter at hand o the arguments given in 52 are not most convincing and could benefit from a more thorough exposition in particular for the sentiment analysis task it is not clear enough in my view that it is true that since the goal is to classify the emotional tone as either 1 or 0 the specific contents of the text are not very important here one could argue that a single word in a sentence can change its meaning and sentiment o the written could be more polished as a practitioner using rnns daily i find this paper exciting as an attempt to conceptualize both data set properties and dynamic neural network families i believe that the authors should address the shortcomings i think hinder the papers arguments and exposition of preexisting work on the analysis of dynamic neural networksdocsepsummary the paper analyses the taxonomy over memorybased neural networks in the decreasing order of capacity neural ram to neural stack neural stack to lstm and lstm to vanilla rnn the experiments with synthetic and nlp datasets demonstrate the benefits of using models 
that fit with task types comment overall the paper is well written and presents interesting analysis of different memory architectures however the contribution is rather limited the proposed taxonomy is not new it is a little bit obvious and mentioned before in 1 unfortunately this was not cited in the manuscript the theorems on inclusion relationship are also obvious and the main contribution of the paper is to formally show that in mathematical forms the experiments on synthetic tasks give some insights into the models operations yet similar analyses can be found in 2 3 to verify the models really learn the task the authors should include tests on unseen sequence lengths there remains questions unexplained in nlp tasks such as why multislot memory did not show more advantages in movie review and why neural stack performed worse than lstm in babi data minor potential errors in eq 6 rt1 should be rt the lstm presented in section 32 is not the common one normally there should be xt term in eq 3 and htgottanhrt in eq 6 the author should follow the common lstm formulas which may lead to different proofs or include reference to their lstm version 1 yogatama et al memory architectures in recurrent neural network language models iclr18 2 joulin et al inferring algorithmic patterns with stackaugmented recurrent nets nips15 3 graves et al neural turing machines arxiv preprint arxiv14105401 2014 docsepthe authors propose a reviewstyle overview of memory systems within neural networks from simple rnns to stackbased memory architectures and ntm memnetstyle architectures they propose some reductions to imply how one model can be used or modify to simulate another they then make predictions about which type of models should be best on different types of tasks unfortunately i did not find the paper particularly well written and the taxonomy was not illuminating for me i actually felt in the endeavor of creating a simple taxonomy the authors have created confusing simplifications eg lstm state memory and memory of a single external event to me is misleading as we know an lstm can compress many external events into its hidden units furthermore the taxonomy did not provide me with any new insights or display a prediction that was actually clairvoyant ie it was clear from the outset that a memory network say will be much better at babi than a stackaugmented neural network it would be more interesting to me for example if the paper could thus formalize why ntms dncs say do not outperform lstms at language modeling for example i found the reductions somewhat shady eg the ram simulation of a stack is possible however the model could only learn the proposed reduction if the number of write heads was equal to the number of memory slots or unless it had on thinking steps per time step where n is the number of memory slots so its not a very realistic reduction you would never see a memory network for example simulating a stack due to the fixed writeoneslotpertimestep interface nit im not sure the authors should be saying they developed four synthetic tasks when many of these tasks have previously been proposed and published counting copy reverse copy ### Summary:
this paper presents a taxonomic study of neural network architectures focussing on those which seek to map onto different parts of the hierarchy of models of computation dfas pdas etc the paper is split between defining the taxonomy and comparing its elements on synthetic and nlp tasks in fact babi which is also synthetic im a fairly biased assessor of this sort of paper as i generally like this topical area and think there is a need for more work of this nature in our field i welcome it and believe the cfp calls for papers like this learning representations of outputs or structured states theoretical issues in deep learning however despite my personal enthusiasm the reviews tell a different story the scores for this paper are all over the place and thats after some attempt at harmonisation i am satisfied that the authors have had a fair shot at defending their paper and that the reviewers have engaged with the discussion process im afraid the emerging consensus still seems to be in favour of rejection despite my own views im not comfortable bumping it up into acceptance territory on the basis of this assessment reviewer 1 is the only enthusiastic proponent of the paper but their statement of support for the paper has done little to sway the others the arguments by reviewer 3 specifically are quite salient it is important to seek informative and useful taxonomies of the sort presented in this work but they must have practical utility from reading the paper i share some of this reviewers concerns while it is clear to me what use there is in producing studies of the sort presented in this paper it is not immediately clear what the utility of this particular study is would i practically speaking be able to make an informed choice as to what model class to attempt for a problem in a way that wouldnt be indistinguishable from common approaches eg start simple add complexity i am afraid i agree with this reviewer that i would not my conclusion is that there is not a strong consensus for accepting the paper while i wouldnt mind seeing this work presented at the conference due to the competitive nature of the paper selection process im afraid the line must be drawn somewhere i do look forward to rereading this paper after the authors have had a chance to improve and expand upon it
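For readers unfamiliar with the stack-augmented models placed between LSTMs and neural RAMs in the taxonomy discussed above, a heavily simplified soft push/pop update is sketched below. Real implementations (e.g. Joulin & Mikolov 2015, Grefenstette et al. 2015) use learned gate vectors and strength values rather than this single-scalar-gate form, so treat this purely as an intuition aid.

```python
import torch

def soft_stack_step(stack, push, pop, new_top):
    # stack: (depth, dim); push, pop: scalars in [0, 1]; new_top: (dim,)
    # A pop softly shifts every slot up by one; a push softly writes new_top on top.
    zero = torch.zeros_like(stack[:1])
    popped = torch.cat([stack[1:], zero], dim=0)      # contents after a full pop
    kept = pop * popped + (1.0 - pop) * stack         # soft pop
    pushed = torch.cat([new_top.unsqueeze(0), kept[:-1]], dim=0)
    return push * pushed + (1.0 - push) * kept        # soft push
```

Because the whole update is differentiable in push, pop, and the stored vectors, a controller RNN can learn when to use the external memory, which is exactly the capacity gap over a plain LSTM that the paper's counting and copy tasks probe.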
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper aims to provide theoretical understanding for contrastive learning where similar pairs of points x and x are encouraged to have similar representations through an infonce inspired objective function some prior works show the benefit of learned representations for linearly classifying downstream classes by making conditional independence like assumption on the similar pairs or positive samples ie x and x are approximately conditionally independent given downstream label y this work argues that these assumptions are quite strong for contrastive learning with data augmentations and aims to show guarantees under the following weaker and more realistic assumption support of augmentation distribution of different inputs from the same class overlap to form a connected graph of inputs within a class whereas support of augmentations of inputs from different classes do not overlap lower and upper bounds using this and some other assumptions connecting the downstream performance of representation function to the contrastive loss some simulation experiments are presented to support some aspects of the theoretical analysis using the insights from the analysis the paper proposes an average confusion ratio acr metric that can be used to predict the ranking of downstream performances of different augmentations using only unlabeled data experimental evidence is provided on cifar and stl datasets to verify the efficacy of this metric for some practical augmentations while there are some interesting aspects in the paper especially the acr metric the theoretical analysis seems to have raised many questions and concerns that i have summarized below details in main review soundness of assumptions assumption 46 which is crucial seems questionable and may not be coherent or appropriate to make in this setting more on this in point w2 of main review deeper dive into theoretical results there is a lack of discussion about the nonvacuousness of the bounds in the main results theorem 42 and 48 that puts the interpretation and significance of the result in question more on this and related issues in point w2 of main review comparison to prior work the work of haochen et al in particular is not adequately compared to especially since some of the points being addressed here are covered through a different kind of analysis in that paper more on this in point w3 of main review strengths s1 the problem being addressed is very relevant contrastive learning has enjoyed a lot of empirical success and various works on theoretically understanding lack in one of many aspects when it comes to closeness to practice this paper addresses issues with the theoretical assumptions and results in many prior work s2 theorem 42 that upper bounds the downstream classification loss without conditional independence is new and interesting the acr metric that can select good augmentations using just unlabeled data is also an interesting finding s2 various parts of the paper are accompanied with experiments simulations and on standard datasets to relate the theoretical analysis to practice s3 paper is clearly written and easy to follow weaknesses here are many concerns about the theoretical assumptions and results that would help to have addressed by the authors w1 assumption 46 one of the main concerns is the perfect alignment assumption which assumes that the optimal solution f of the nce loss will satisfy fx fx for all 
positive samples x and x this seems like an unnatural assumption to make directly on the optimal solution and is implicitly an assumption on the distribution of positive samples px x since the optimal unit norm representations that minimizes that infonce loss depends strongly on this distribution while some arguments for perfect alignment have been made in prior work 3 it is not clear whether that can be coherently imported here as an assumption in fact it is quite likely that the optimal infonce solution will not satisfy this assumption exactly for most joint distributions px x at the very least a lot more justification is needed this benign looking assumption undercuts the point that results here are shown under less restrictive assumptions compared to prior work and it kind of trivializes the result in theorem 48 note that the concern here is not just that the assumption is too strong or unrealistic which is often unavoidable and acceptable but that its not clear when the assumption can even be true and whether or not it is mathematically compatible with the rest of the setting w2 nonvacuousness of bounds i found theorem 42 interesting since it can show a bound similar to the bound from 1 but without the conditional independence one discussion i found missing is about how vacuousnonvacuous the upper bound can be since the upper bound looks like mathcallncef logmk sqrttextvarfx y it is not entirely clear whether this bound can ever be nonvacuous ie are there cases where the sum of these terms can be very small for eg in theorem 48 where textvarfx y 0 i can estimate a rough lower bound on this upper bound of logmk m11ke2 logmk approx log1 k1e2 which can be large for a large value of k here i used fx 1 a discussion about the vacuousness or not of the bound can be critical in understanding whether the bound is indeed meaningful a side note given theorem 42 and proposition 47 theorem 48 just seems like a corollary rather than a theorem w3 the result in 2 does not need conditional independence kind of assumption and in fact does analyze a more general case albeit for a different spectral version of the contrastive loss in particular assumption 41 from this paper will lead to alpha0 from that paper and assumption 45 from this paper will lead to reasonably high value for the dirichlet conductance rhok that shows up in their bound given that their results for spectral contrastive learning hold for the setting being considered in this paper it is worth making a more detailed comparison to that paper other comments and questions section 51 seems to have some potentially interesting hypersphere example to demonstrate many of the points but i thought it was not discussed enough in the main paper it would help to give a short and clear summary of the results in section b in the main paper some statements made in the paper deserve much more justification or could be toned down eg the class collision terms that are incompressible in saunshi et al 2019 now disappear in our bounds by adopting the infonce loss which also explains why infonce performs better in practice this does not really seem like an explanation for why infonce performs better in practice it is a weak justification at best increasing m indeed leads to a lower approximation error and helps close the gap this is not clear since mathcallncef also depends on m the setting for proposition 31 is not described clearly with regards to what kind of augmentation distributions overlapping or not does it hold for i can only guess that it is for the 
case where they dont overlap for any pair of inputs so it is not applicable when assumption 45 is satisfied for example some clarification on this would be appreciated assumption 41 says that the conditional label distribution pyx pyx matches for positive samples x and x however this assumption is invoked in many places to say that inputs from different classes do not have overlapping support of augmentations and that the label is deterministic given x or x eg besides because proper data augmentation will not cause interclass support overlap assumption 41 on page 6 perhaps this assumption needs to be modified appropriately or may be a separate assumption is needed about augmentation distributions not overlapping between inputs from different classes the acr metric makes sense and it is interesting that it helps in practice but connection to theory is weaker than it is made out to be after all the theory only talks about the overlap between augmentation distributions but nothing about the nearest neighbors wrt randomly initialized network features or the learned features will help to explain what fjx means in theorem 42 seems like it means that jth coordinate of fx proposition 47 should only be true for f and not all f is chaos used as a technical term if so any citation for its prior usage would be useful to include missing citations 45 theoretically analyze contrastive learning for downstream task 6 reports invertedu shaped curves as in figures 7 and 8 in this paper 1 arora et al a theoretical analysis of contrastive unsupervised representation learning 2019 2 haochen et al provable guarantees for selfsupervised deep learning with spectral contrastive loss 2021 3 wang et al understanding contrastive representation learning through alignment and uniformity on the hypersphere 2020 4 tosh et al contrastive estimation reveals topic posterior information to linear models 2020 5 tosh et al contrastive learning multiview redundancy and linear models 2020 6 tian et al what makes for good views for contrastive learning 2020 the paper aims to provide some theoretical analysis for contrastive representation learning under weaker assumptions than prior work like conditional independence and has some interesting empirical findings about how performance of augmentations can be ranked using a metric that depends just on unlabeled data while the general idea is nice there are issues with the theoretical setup as described in the main review raising questions about the meaningfulness of the assumptions and results furthermore the comparison to prior very relevant work is also inadequate this leads me to assign a score of reject for the current version docsepthe current leading theory of what contrastive losses are doing and why they work interprets contrastive learning as balancing alignment with uniformity as proposed in 2 this paper seeks to augment that understanding of contrastive learning using a new perspective focusing on the role of data augmentation it is wellknown that contrastive learning techniques are highly sensitive to the data augmentation schemes used most notably discussed in 1 in this work the authors interpret augmentation as a way to connect different intraclass images together then the contrastive loss is seen as a way to gradually cluster intraclass samples together by aligning augmented views producing representations that are classseparated even in feature space on top of introducing a new lens with which to understand contrastive learning the authors also provide proofs on 
performance guarantees as well as a new evaluation metric the metric is inspired by their augmentationoriented understanding and was also found to align well with downstream performance the authors provide a scenario where alignment and uniformity are satisfied but fails to translate well to downstream classification accuracy this suggests to them that the instance discrimination task alone cannot guarantee the learning of classdiscriminative features that would enable better downstream classification and directs their attention to the other important component of contrastivelearning to help explain the story augmentation they then build off the analytical work of 3 to prove guarantees for the downstream performance with a relaxed assumption 1 chen et al a simple framework for contrastive learning of visual representations 2021 2 wang and isola understanding contrastive representation learning through alignment and uniformity on the hypersphere 2020 3 saunshi et al a theoretical analysis of contrastive unsupervised representation learning 2019 in this exploration of data augmentation much emphasis has been placed in the concept of augmentation strength but what about the choices of augmentations themselves can we perhaps use the arc metric to evaluate compare and select data augmentation schemes themselves separately can the arc metric be used to guide the selection of data augmentation parameters for example for an arbitrary given augmentation scheme we can calculate the parameters that would maximize the arc metric in an unsupervised way then would applying those augmentations lead to comparable performance across different choices of augmentation strategies in other words is the arc metric a strongenough metric that supercedes the selection of data augmentation strategies i would like to see more thought analysis and application regarding this new metric to fully convince me of its value and uses additionally to bridge the synthetic scenario and real data i would like to see an augmentation graph of real augmented images drawn with tconnections where t can even be 1 and perhaps varied over different strength parameters i think there is definitely a gap between the authors theoretical proposalsscenarios and that of actual natural data that can be closed with extra effort for example the authors only mention one augmentation scheme to measure augmentation strength in realworld datasets the randomresizedcrop operator and only evaluate it using their proposed metric lastly the reference section appears rather sparse given the massive catalogue of work including theoretical surrounding contrastive learning some typos alone cannot guarantee to learn classdiscriminative should be alone cannot guarantee the learning of classdiscriminative comparing to saunshi while ours only should be compared to saunshi ours only page 6 section 42 and the surrogate could complete its mission should be and the surrogate can complete its mission page 7 section 43 different augmentation strength affects should be different augmentation strengths affect page 9 section 6 we take 500 sample asfor the encoder class we should be we take 500 samples asfor the encoder class we page 16 section d1 the authors expand our understanding of contrastive learning on top of the existing alignment and uniformity perspective by studying the role of data augmentation they provide theoretical guarantees on downstream performance and propose an interesting new metric that can be evaluated using only the given unsupervised data 
overall i think this is a strong submission and would recommend an accept docsepthe paper proposes a new theory for understanding contrastive representation learning the novelty is the focus on the interplay between alignment and augmentation prior work has identified alignment as one of the factors of contrastive learning but have not investigated how different types of augmentations may affect the learned embeddings this work adds that missing piece the results intuitively make sense showing that proper amount of augmentation that connects samples of the same class has positive effect on downstream classification empirically the authors verify that too weak or too strong augmentation harms performance based on observations the authors define a metric on ratio of positive pairs among nearest embedding neighbors and found the change of this metric throughout training positively correlate with performance strengths theory considering both augmentation and alignment without making too much assumptions empirical verification on the niceness of a proper amount of augmentation the acr and arc metrics characterizes the interplay between augmentation and alignment and are indicative of task performance weaknesses the theoretical results are a bit weak eg as pointed out in paper thm 48 only talks about the minimizer of the contrastive loss maybe this is unavoidable with the current set of augmentations but can there be a version with the perfect alignment assumptions relaxed into approximate alignment if so it might be possible to talk about nonminimizers proposition 31 is incorrect but fixable i think no finite samples can attain uniformity because perfect alignment implies features are concentrated among finite number of vectors implies not a uniform distribution the exact stated form is wrong but i think some variants of it is true figure 6 what is the experiment setting for this sec 51 and when r is too large r3 is r the geodesic distance on sphere or euclidean distance in the ambient space either case it is really large almost containing the entire sphere is there not a milder augmentation that can also show the difference the paper provides a theoretical analysis on the interplay between alignment and augmentations empirical experiments nicely complement the theory and lead to interesting metrics that reveal the properties of this interplay overall the paper is also nicely written while there is one slightly incorrect claim which i think is fixable and some places that would need clarification i think the findings in this paper are valuable to the field thus i recommend acceptance docsepthe authors provided a new understanding of contrastive learning from the perspective of data augmentation for intraclass samples in particular the authors proposed to understand the role of data augmentation as to create certain chaos between intraclass samples so to encourage the clustering of intraclass samples and also the learning of classseparated representations additionally a new metric arc is proposed to evaluate the downstream performance the conclusion is validated via both synthetic and realworld datasets strengths the authors provided a new understanding of contrastive learning from the perspective of data augmentation for intraclass samples moreover to evaluate the effect of data augmentation a quantitative analysis is provided along with a new metric weaknesses theorem 42 for the downstream classification the loss is upper and lower bounded in terms of the lnce loss the authors provided 
comparison with saunshi et al 2019 from the technical perspective is there any intuitive explanation on how to evaluate the classification performance in terms of contrastive learning loss assumption 45 intraclass connectivity this assumption is strong without the label information it seems impossible to derive such augmentation set please add discussion on the practicality of this assumption and show an example on some datasets if possible proposition 47 based on the proof provided in the appendix the conclusion not only relies on the existence of such augmentation set assumption 45 but also that such augmentation should be applied to intraclass samples ie tixi tjxj this kind of operation is impractical without the label information please add comment on that in the experiments randomresizedcrop is used to illustrate the relationship between aug strength and accarc the best performance for different datasets all achieves at aug strength 092 any comments on that eg in terms of data augmentation for intraclass samples at aug strength 092 in practice there are different kinds of data augmentation eg flipping rotation and scaling the authors only showed results on randomresizedcrop can you show results for other data augmentation types do you have similar conclusion as that for randomresizedcrop different data augmentation types are often used together in practice eg randomly pick two augmentations from the augmentation set for the raw image then how to apply the proposed analysis in such practical case in particular how to measure the aug strength the authors emphasized the importance of the data augmentation design for intraclass samples ie perfect overlapping 1 the study on applying the analysis to existing contrastive learning algorithms is however preliminary only with randomresizedcrop 2 based on the proposed analysis how to find the sweet spot of data augmentation for contrastive learning is crucial but this is not discussed the idea of understanding contrastive learning from the perspective of data augmentation for intraclass samples is interesting however 1 some key assumption for the analysis is too strong 2 the analysis on the existing contrastive learning algorithms is preliminary and needs more work and 3 the authors emphasized the importance of finding the sweet spot of data augmentation ie perfect overlapping but how to achieve that in practice is not discussed ### Summary:
the paper under review provides a theoretical analysis for contrastive representation learning the paper proposes a guarantee on the performance specifically upper and lower bounds without resorting to previously used conditional independence assumptions throughout the theoretical results and assumptions are supported by experiments after a lively discussion and after changes made to the paper in the revision stage all four reviewers recommend this paper for acceptance reviewer twsb appreciates that the paper makes weaker assumptions than prior work ie not assuming conditional independence but raises a number of serious concerns about the theoretical results the review questions whether assumption 46 used in the theory can be true and whether the bound is vacuous the authors argue that this assumption was used in prior work point out that only some of their results rely on this assumption and that the assumption is compatible with the theory the response of the authors partly resolved the reviewers concern and the reviewer raised their score reviewer btla finds the idea of understanding contrastive learning for intraclass samples interesting but finds some key assumptions too strong a critique similar to that raised by reviewer twsb the authors responded and the reviewer increased their score and mentioned that most concerns were addressed the response partially resolved the reviewers concern and the reviewer now also recommends acceptance i recommend accepting the paper understanding contrastive learning better is an important problem and based on my own reading i agree with the reviewers that the paper contributes to the understanding of contrastive learning two reviewers had concerns about unrealistic assumptions but those have been largely resolved in the discussion
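as an illustration of the metric the reviews above keep returning to, the ratio of positive pairs among nearest embedding neighbours, here is a minimal numpy sketch of one such nearest-neighbour positive-pair ratio. the function name, the cosine-similarity choice and the toy embeddings are assumptions of this sketch, and the paper's exact definitions of arc and acr may differ.

```python
import numpy as np

def positive_pair_ratio(embeddings_a, embeddings_b):
    """Fraction of samples whose nearest neighbour (by cosine similarity) among
    all other embeddings is the other augmented view of the same image.
    embeddings_a, embeddings_b: (n, d) arrays; row i of each is a view of image i."""
    z = np.concatenate([embeddings_a, embeddings_b], axis=0)      # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)              # unit-normalize
    sim = z @ z.T
    np.fill_diagonal(sim, -np.inf)                                # exclude self-match
    nn = sim.argmax(axis=1)                                       # nearest neighbour index
    n = embeddings_a.shape[0]
    partner = np.concatenate([np.arange(n) + n, np.arange(n)])    # index of the positive view
    return float((nn == partner).mean())

# toy usage with random vectors standing in for encoder outputs of two augmented views
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 32))
view_a = base + 0.05 * rng.normal(size=base.shape)
view_b = base + 0.05 * rng.normal(size=base.shape)
print(positive_pair_ratio(view_a, view_b))
```

in a real evaluation the two views would come from the learned encoder applied to two augmentations of each training image, and the quantity would be tracked over training as the reviews describe.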
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary in this paper the authors propose to replace commonlyused shootingbased methods for action sequence planning in learned latentspace dynamics models by a collocationbased method they argue that shootingbased methods exhibit problematic behavior especially for sparsereward and longhorizon tasks as shooting methods do not allow for planning trajectories which slightly violate the learned dynamics the authors propose a collocation method based on levenbergmarquard optimization with a scheduled lagrange multiplier which outperforms two shooting methods cem and gradientbased on a set of robotic tasks pros the paper is clearly written and experiments demonstrated improved performance over cem and gradient descent optimization of actions weaknesses the experiments are limited to sparsereward tasks it may be interesting to compare the performance of latco and cem on deepmind control suite tasks same as planet also to see how latco performs on densereward tasks it is unclear why collocation should find goals better than cem or gradient descent for sparse rewards if the reward function network learns this sparse reward there is no meaningful gradient towards the goal for an optimization based method cem seems to have a better chance to find the goal due to randomization of actions if not reward shaping has been used why is the learned reward by the planet network useful for collocation conclusions claims that the approach would be removing the need for reward shaping however the task is simplified by the oracle agent for training data collection which uses reward shaping the manual interaction is shifted from reward shaping to training data augmentation please clarify recommendation the main concern about the paper is that optimizationbased collocation might not be appropriate for the sparse reward case for a method that learns to predict reward for states hence experimental results are questionable the rebuttal should carefully address this issue the idea is evaluated in a sufficient range of experiments although further experiments on standardized benchmarks deepmind control suite would significantly improve the paper the points raised in weaknesses above should be addressed questions for rebuttal see weaknesses why not use gradient descent to update the lagrange multipliers what is the role of epsilon in the lagrangian in algorithm 1 l5 how do the terms in the lagrangian relate to the residual terms especially why does the quadratic action objective in the lagrangian relate to the residual max0 at amathrm max in 63 you write to provide a fair comparison that isolates the effects of different planning methods we use the same dynamics model architecture for all agents is it only the same architecture or the same dynamics model at least for the models trained only on the oracle data what is the task in sec 64 to generate the plots in fig 5 why do the returns get negative if the reward is sparse and positive further comments rename lambdat in eq 6 to lambdatmathrmdyn to match l5 of algorithm 1 what is the value of lambdatmathrmact for the reward objective we found it convenient to map the reward to the negative part of the real line with the softplus operation sounds confusing to me i associate negative numbers with the negative part of the real line maybe phrase it like for the reward objective we form residuals by squashing the negated reward through a softplus function algorithm 
1 t_rep is not defined algorithm 1 l13 the elbo is maximized gradient ascent with some learning rate theta <- theta + alpha nabla emph seems to give underlined instead of italic characters see the references section this is probably not intended please plot lagrange multiplier values in fig 5 postrebuttal comments the paper should further elaborate on the smooth reward predictions and how online learning in the sparse reward setting can be possible with latco it seems the method requires a specific initialization/implementation of the reward predictor for instance to overestimate rewards so that the method has to explore the areas where reward is overestimated and pull down the predicted reward the paper should explain how this was implemented this kind of exploration would be prone to the curse of dimensionality if the state representation of the environment is highdimensional the authors should discuss this limitation thoroughly this might also explain why the tasks in the experiments are limited to 2dimensional states i wonder about the discretization of the colors in fig 8 higher quantization of color should be provided so gradients of the reward landscape can be assessed the paper still does not detail the update rule for lambda_act overall the author response has addressed some of my technical concerns but the main challenges are only addressed partially the paper is still borderline and might need another thorough round of improvement and resubmission to another venue docsep summary the paper proposes to transpose collocation methods to solve planning problems in a learned latent state space this can then be used as a replacement for shooting methods in modelbased rl particularly suitable for imagebased tasks where planning in the observation space is impractical pros basic shooting methods are a primitive planning technique we should be able to do much better using collocation methods in learned latent state spaces makes sense this paper is one of the first to provide a working realization of this cons the problem is only difficult because of the attempt to learn the task directly from visual inputs from a practical robotics and planning perspective the task problems are very dated eg from 30 years ago in this sense the tasks are straw man problems that are uninspiring shooting methods provide exploration that the gradientdriven collocation methods do not allow for the tradeoffs are not as simple as portrayed recommendations i currently lean marginally in favor of acceptance purely on the grounds that transposing collocation methods to latent spaces does have future potential however the given examples are uninteresting questions how would the results compare to simply using the latent state to estimate a traditional compact state descriptor and then using that with a classical motion planner for the given example tasks that seems very feasible can planning methods like chomp also be realized in the latent space what are the general constraints or restrictions if any on transposing the many known planning methods into the latent space what is the impact of choosing a time horizon t that is too short or too long what is stochastic about the dynamics if anything for the chosen experimental tasks what is the action space for the given tasks what is a_max for the tasks feedback the output is a trajectory not a policy making it actionable would require using the optimized trajectories to learn a policy or to use mpc this aspect is missing from the paper similarly the exploration issue is
avoided cf sec 61 thus overall the paper is not really solving an rl problem the title could more directly address the contribution ie motion planning via latentspace collocation to this collocation methods sic figure 2 the text refers to a decoder but this is missing in the figure the dynamics model is left unlabeled it is worthwhile briefly discussing the broader space of collocation methods and where your method fits within that taxonomy section 5 constrained optimization balance between the strength of the dynamics constraint missing and the objective docsep paper summary this paper introduces a visionbased motion planning approach using collocation many existing approaches to visionbased control rely on computationally expensive planning approaches using shooting to perform modelbased control which is often only useful in simple control tasks collocation approaches are effective in settings with difficult path constraints and thus exploited by this work to dramatically improve modelbased reinforcement learning i like the idea but it is a relatively small extension to existing work so i am inclined to rate this paper as marginally below the acceptance threshold i would be willing to revise my score if the paper was revised to better clarify the algorithm to align with the methods used in experiments better justify the reasons why ilqr trajectory optimisation with locally linear dynamics models was not used as a baseline or even better include this as a baseline pros the paper is well written and clearly laid out solving a collocation problem in the latent space is a sensible approach and a much better idea than using cem planning or shooting cons its a reasonably straightforward application of collocation in a learned latent space while i have not seen this done previously it is a relatively obvious improvement the paper motivates the need for collocation in the context of long horizon tasks where shooting performs poorly however none of the tasks pushing and reaching in free space considered in this work are long horizon tasks or particularly challenging general recommendations for improvement and queries id recommend replacing the term long horizon tasks with something more suitable along the lines of what is actually demonstrated in the experimental results eg visionbased motion planning page 2 latent planning the paper mentions work on structured latent dynamical systems watter et al 15 but disregards these however these approaches relied on locallylinear predictive models which may be difficult to design no design is required for latent dynamical systems with local linear latent dynamics eg watter et al 15 fraccaro et al 17httpsarxivorgpdf171005741pdf all transition matrices and parameters are learned using a slightly different elbo the benefit of this approach is that it allows for standard trajectory optimisation approaches like ilqr to be applied directly i would like to see a comparison against trajectory optimisation using a dynamical system with learned locally linear models which arguably allows for simpler planning and control along the lines above there is a recent body of work looking at imposing more structure in the latent dynamical system to simplify and improve downstream control eg embedding for proportionality jaques et alhttpsarxivorgabs200601959 koopman embeddings for open loop control with qp li et alhttpsopenreviewnetforumidh1ldza4tpr in contrast this work seems to advocate the opposite approach ignoring the latent dynamical system learned and focusing on 
better methods to solve a more challenging optimisation problem i believe that more discussion on the contrasts between these ideas would be a useful addition to this paper algorithm 1 the algorithm and training approaches lack clarity and cause some confusion which needs to be improved the algorithm seems to indicate that dynamics model learning and planning happen jointly which doesnt really make sense we shouldnt need to relearn a dynamics model at planning time unless the intention was to imply that this is an online learning approach i assume that this is not the case as experimental methods seem to indicate that dynamics and reward models are pretrained separately from trajectory optimisation using collocation please clarify and ensure that the methodology lines up with what was demonstrated in the experiments section docsepsummary the paper studies the problem of planning in domains with sparse rewards where observations are in the form of images it focuses on solving this problem using modelbased rl with emphasis on better trajectory optimization the proposed solution uses latent models to extract latent representations of the planning problem that is optimized using the levenbergmarquardt algorithm over a horizon the experimental results show improvements over a zerothorder cem optimization b planet hafner et al 2019 and c gradientbased method that optimizes the objective in eq 1 strengths i the motivation organization and the overall writing of the paper are clear ii the tested experimental domains are good representatives of the realistic planning setting identified in the paper weaknesses i discussion of literature on planning in latent spaces 12345 is left out and should be included namely 12 performs classical planning from images and 345 perform planning with learned neural models here space can be saved by removing figure 4 since all of its subfigures look identical given their visual quality ii have you tried solving eq 2 directly similar to 4 it seems more appropriate baseline compared to c ie as labeled above iii how do you reason about the length of the horizon t for example 12 use heuristic search iv there does not seem to be any presentation of hyperparameter selectionoptimization runtime results or quality of solutions table 1 is too highlevel to provide any meaningful insight into understanding how each method compares similarly figure 5 is very hard to read and not clear what each axis represents overall i would say this is the weakest part of the paper references 1 classical planning in deep latent space bridging the subsymbolicsymbolic boundary asai and fukunaga aaai18 2 learning neuralsymbolic descriptive planning models via cubespace priors the voyage home to strips asai and muise ijcai20 3 nonlinear hybrid planning with deep net learned transition models and mixedinteger linear programming say et al ijcai17 4 scalable planning with deep neural network learned transition models wu et al jair 5 optimal control via neural networks a convex approach chen et al iclr 2019 post rebuttal to best of my understanding the authors have addressed all my questions and suggestions with the appropriate revision of their paper specifically the necessary discussion of hyperparameter selection is added and presentation of the runtimesolution quality results ie raised in point iv have been improved with the inclusion of important details additional discussion of related work is added ie raised in point i and questions are addressed ie raised in point ii and iii as such i have 
updated my rating accordingly ### Summary:
this work applies collocation a well known trajectory optimization technique to the problem of planning in learned visual latent spaces evaluations show that collocationbased optimization outperforms shooting via cem planet and shooting via gradient descent pros i agree with the reviewers that this idea makes sense and will very likely be built on in future work the authors have very actively addressed most comments of all reviewers that engaged in discussion cons i agree with the reviewers that this is a very simple and straightforward application of collocation methods to the visual latent space domain furthermore the chosen tasks are fairly simplistic metaworld has a variety of tasks most of which are more complex than the reaching and pushing tasks that were chosen for this manuscript even with all the updates the evaluation is still very shallow i agree with the reviewers that obtaining results for both settings a visual mpc with a pretrained or even ground truth dynamics model and b the modelbased rl setting for which the model is being learned is important while the authors have added some of these experiments a detailed discussion of how the results change from a to b is missing furthermore when using collocation in this mbrl setting how should dynamics constraints be enforced should they even be enforced when the model is still really bad how does the comparison between collocation and shooting fare when you use dense shaped rewards for the sawyer tasks many questions come to mind some of which have been raised by the reviewers and my main point is that a simple idea plus indepth analysis of some of these questions would have created a stronger contribution alternatively real system experiments would have increased the significance of this work i dont see any direct references to gradientbased visual latentspace planning shooting but related work on this does exist in my opinion a simple straightforward idea is no reason to reject a paper however currently the reader does not learn when collocation should be considered over other trajectory optimization methods when attempting to plan in a learned visual latent space and what some of the main remaining challenges are because of this i lean towards recommending reject and would encourage the authors to deepen their analysis of collocation in visual latent space
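to make the planning formulation discussed above concrete, the following is a minimal pytorch sketch of collocation in a learned latent space: latent states and actions are optimized jointly, and the learned dynamics enter only as a soft penalty whose weight is annealed upward. the paper itself reportedly uses a levenberg-marquardt solver with a scheduled lagrange multiplier, so the adam-based penalty loop, the function names and the stand-in linear dynamics and reward below are assumptions of this sketch.

```python
import torch

def collocate(dyn, rew, z0, horizon, act_dim, steps=200, penalty=(0.1, 10.0)):
    """Plan by jointly optimizing latent states and actions.
    dyn(z, a) -> next latent state, rew(z) -> per-step reward (pretrained, frozen).
    Dynamics are soft constraints whose penalty weight is annealed upward."""
    latent_dim = z0.shape[-1]
    z = torch.zeros(horizon, latent_dim, requires_grad=True)
    a = torch.zeros(horizon, act_dim, requires_grad=True)
    opt = torch.optim.Adam([z, a], lr=1e-2)
    for step in range(steps):
        lam = penalty[0] + (penalty[1] - penalty[0]) * step / steps   # scheduled weight
        z_prev = torch.cat([z0.unsqueeze(0), z[:-1]], dim=0)
        dyn_residual = ((z - dyn(z_prev, a)) ** 2).sum()              # dynamics violation
        loss = -rew(z).sum() + lam * dyn_residual
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach(), a.detach()

# stand-in linear latent dynamics and a stand-in learned reward, for illustration only
latent_dim, act_dim = 8, 4
A = 0.1 * torch.randn(latent_dim, latent_dim)
B = 0.1 * torch.randn(latent_dim, act_dim)
goal = torch.randn(latent_dim)
dyn = lambda z, a: z @ A.T + a @ B.T
rew = lambda z: -((z - goal) ** 2).sum(-1)
z_plan, a_plan = collocate(dyn, rew, z0=torch.zeros(latent_dim), horizon=20, act_dim=act_dim)
```

allowing the plan to violate the learned dynamics early in optimization (small penalty) and tightening the constraint later is the intuition the reviews attribute to collocation over shooting.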
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper performs an exhaustive empirical study to propose model patching where the goal is to improve accuracy for openvocabulary models on specific tasks ie patching tasks without degrading accuracy on tasks where performance is already adequate model patching refers to interpolation of model weights between a finetuned model and the original model among other experiments the paper shows results of model patching on nine tasks where zeroshot clip performs poorly and obtains improvement of over 1560 percentage points while not losing performance on imagenet itself the paper also talks about broad transfer across multiple tasks and notes that the proposed approach becomes more effective with increasing scale of datasets strengths the methodology tested and proposed is very simple and easytoimplement the results are interesting and of broad relevance to the community especially to those in largescale ml practice the results are comprehensive and cover experiments across various settings including ones such as typographic attacks counting and visual questionanswering the appendix is loaded with even more results which make this an elaborate empirical effort weaknesses the primary weakness of the paper is the limitation mentioned in the paper itself l315316 our method provides no guarantees on which data the model performance might change this limits the robustness of the takeaways from the paper and where one may be able to use them especially considering the contributions are largely empirical considering alpha is the key hyperparameter for the interpolation it would have been nice to see how the performance changes with different alphas this to me is another weakness of the paper amid the large number of results it lacks a clearer perspective on how a reader can take away lessons that can be used in practice especially considering practice is the focus of this work in continuation to the above comment there is no evident trend in the results or a summary of it in the paper to see when this method works best considering the large number of results presented in the work including the appendix the paper definitely needs a summary or discussion summarizing the lessons and takeaways for the work to be useful to the readers it becomes the readers burden otherwise to sift and find the takeaways the paper doesnt compare with other papers that linearly interpolate neural network weights as mentioned in the baselines section please see the weaknesses listed above docsepin this submission the authors attempt to improve the clip model on the tasks that it performs poorly they propose to finetune pretrained clip model on the target task with frozen classification layer and derive the final model with a linear interpolation between pretrained and finetuned model the mixing coefficient is decided by validation this taskpatching procedure could be adopted for patching multiple tasks in the experiments the authors show that the proposed method retains good performance on the task where the pretrained model is already good at while improves the performance on the target tasks in addition to classification tasks the authors also show that their patching approach could improve clip model on 1 typographic attacks 2 object counting and 3 visual question answering strengths improving clip model is an active research problem the authors propose a method to improve the model performance on wider range 
of tasks for openvocabulary classification the proposed approach is simple yet effective to improve clip model on poorly performed task the authors propose to freeze the classifier weights derived fro text encoder and finetune the model on given tasks this preserve the ability of openvocabulary classification while improve model performance on new tasks the experiments show decent improvement 1 20 over the tasks for which the model is finetuned while preserving the performance for the task that clip model is already good at in addition to classification tasks the authors show that their patching approach can improve object counting and vqa comparing to the pretrained clip model weaknesses the writing could improve the design of proposed method to retain openvocabulary ability is based on freezing classification layer derived from text encoder this only mentioned in one sentence in experimental setup it was pretty confusing to me how the proposed method retain open vocabulary ability until i locate that single statement in experimental setup i would recommend the authors to make it more clear through out the paper the proposed method is simple however in practice it requires nontrivial effort to patch the model the proposed method requires supervised annotations and finetuning model with hyperparameter search and validating the mixing coefficients the procedure may not be simpler than adding downstream data to the pretraining dataset and then retrain imagetext model the authors only compare the performance against pretrained clip model i am curious how the proposed method compare with a simple baseline adding downstream data to the clip training dataset and retrain imagetext model the authors discussed the limitations in the submission docsepthis paper proposes a simple yet effective way to do model patching by interpolating the weights before and after finetuning experiments are performed with the image encoder from a recent dualstream visionlanguage model clip empirical results show that the proposed method improves performance on other tasks while preserving the performance on imagenet moreover the proposed method can make clip more robust and more powerful for tasks like vqa the paper is overall well structured and easy to follow the method is simple and effective one of my main concerns is about the motivation of patching models on a single new task more discussions on the real applications of this scenario would be necessary to better show the motivation of this research yet another concern is that this work keeps saying the openvocabulary model however it is not clear why the proposed method only works for openvocabulary models or whether can be extended to other openvocabulary models beyond clip the experiments also need some more clarifications though the authors describe some connections and differences with some continual learning methods in the related work section it would better show the efficacy of the proposed method by empirically comparing with them in particular in lines 264266 the authors said that in contrast to regularization or replaybased methods patching requires no extra computational cost during training since the proposed method also requires training ie finetuning on the data of the target task it is not clear to me what the training means here can the authors provide an exact comparison of the performance vs training timedata between the proposed method and regularization methods like ewc and ewc yes there is slight discussion about the limitation at the end of 
the paper ### Summary:
the reviewers had some concerns about the clarity of the motivation and about the baselines my own opinion is that this work is valuable for the community because of the simplicity of the method and the depth of the experiments
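the weight interpolation that the reviews above describe is simple enough to state in a few lines. the sketch below assumes pytorch models with matching state dicts; the function and variable names are mine, and as the reviews note the mixing coefficient alpha would be chosen on held-out data from the patching task.

```python
import copy
import torch

def patch(zeroshot_model, finetuned_model, alpha):
    """Interpolate parameters between the original and the task-finetuned model.
    alpha=0 returns the zero-shot weights, alpha=1 the finetuned ones."""
    patched = copy.deepcopy(zeroshot_model)
    sd_zero = zeroshot_model.state_dict()
    sd_ft = finetuned_model.state_dict()
    sd_patched = {}
    for name, w in sd_zero.items():
        if torch.is_floating_point(w):
            sd_patched[name] = (1 - alpha) * w + alpha * sd_ft[name]
        else:
            # integer buffers (e.g. batch-norm counters) are copied, not interpolated
            sd_patched[name] = w
    patched.load_state_dict(sd_patched)
    return patched

# toy usage; in the setting of the reviews these would be the zero-shot and
# finetuned image encoders, with alpha tuned on held-out data
zero = torch.nn.Linear(4, 2)
ft = copy.deepcopy(zero)
ft.weight.data += 1.0
print(patch(zero, ft, alpha=0.5).weight)
```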
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes a transformerbased method for vehicle trajectory forecasting it proposes to combine the tasks of global localization and local refinement for more accurate trajectory forecasting on the structure side the authors design a mechanism of motion query pair to model motion prediction as the joint optimization of the two tasks moreover the interaction among agents is considered in the proposed method and collaborates to make the dense future prediction the proposed method is demonstrated to be efficient on the largescale waymo open dataset and an endtoend variant of the proposed method is also provided for a broader study strengths 1 the proposed method is well elaborated and the implementation details are necessarily provided to help understand the model design 2 the performance of the proposed mtr on the waymo open motion dataset is good advancing the sota under the setting further 3 the paper is mostly well written that i can understand the motivation highlevel intuition and model design quickly weaknesses 1 some details are not clear for example the implementation of mtre2e is hard to follow for me given the limited illustration at l273 2 though it may not be necessary it would be helpful to ablate the choice of query pair number for endtoend version as well as there is a claim that since 6 intention points are too sparse to well cover all potential future motions may need some experiment backup 3 given the proposed method stresses on the design of query pair it would be helpful for us to better understand the efficiency of this by making an ablation study of using only one type of query or both some technical design of the proposed method is explained at the end of the draft i dont recognize more potential negative societal impact or limitations of this paper docsepthis paper proposes an decoder method for motion prediction task which refines different modes with the static prior and dynamic attention its performance on the waymo open motion dataset is impressive which demonstrates the effectiveness of iteratively refine the prediction similar to detrdabdetr strengths the idea of iteratively refining the prediction by transformer is novel in the motion prediction area the performance on waymo motion dataset is great the ablation experiments is wellorganized and convincing weakness the proposed method is only evaluated on one dataset it would be more convincing if the experiments could be done on other large scale datasets however considering the waymo open motion is fairly large i think it is okay if there is no enough timeresource to try other datasets some parts of the proposed methods is not clearly described in the manuscript which is understandable considering the space limit the proposed method has similarities with the dabdetr in the objection detection area i think it would better position this work if the authors could discuss about the proposed methods relation with recent detr related works the limitation part is ok docsepthe work addresses the motion forecasting problem for autonomous driving the authors introduce a transformerbased framework mtr that works in the following ways as highlighted in the contribution section of the paper separates the modeling of the global intention from local movement refinement in the trajectories in the transformer framework the predictor is inspired by detr interaction modeling between agents via an auxiliary 
dense prediction task essentially letting the model to predict the future directly i didnt fully understand that part see below sota results on the waymo dataset ranked 1 among results without using ensembles since i dont understand the 2nd contribution i cannot recommend accept at this point looking forward to understanding it postrebuttal post rebuttal update i understand the contribution better now in line with other reviews feedback ill change the rating from 4 7 and recommend an accept strengths strong results significant improvement over the sota i checked the leaderboard on waymo open dataset and verified the claims somewhat novelty using detr for motion forecasting is not that new anymore i reviewed several papers for cvpr22 and eccv22 that contained similar ideas but those should be considered concurrent work additionally the hierarchical separation of intent vs fine control adds an additional hint of novelty though the hierarchical separation by itself is not novel either 9 51 good ablation the paper contains ablation for all the contributions and novelties in their methods this is great weaknesses unclear writing abused notations missing descriptions of variables figures that are not selfcontained see below under questions insufficient experiments minor the only results are on the waymo dataset there are other popular datasets such as argoverse and nuscenes having strong results on a secondary dataset would make the representation a lot stronger yes but it didnt hit the mark the limitation section is about highlevel limitation like the method is not great for long tail behaviors not a wish list of future works docsepin this paper a motion transformer mtr framework is proposed for the motion prediction task including marginal and joint motion prediction specifically motion query pairs are designed for global intention localization and local movement refinement which takes advantage of both goalbased methods and the regression methods experiments on waymo open dataset indicate the effectiveness of the proposed method strengths strong motivation and wellorganized promising results weaknesses see questions for details some insightsdetails are not clear lack of some experiments there is a limitation section in the main body of the paper ### Summary:
this paper proposes to model traffic vehicles using a transformerbased architecture for iteratively refining multimodal trajectory predictions while the method is related to and builds upon several similar works in the area it does also introduce some interesting new components such as the iterative refinement and the dynamic attention further the strength of the experimental results from the combined system alone makes this paper important for researchers working in these areas the method achieves the state of the art for trajectory prediction on two very widely used datasets waymo and argoverse compared to published leaderboards all four reviewers unanimously agree that this paper is above the bar for acceptance and i concur
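as a loose illustration of the query-based, iteratively refined trajectory decoding discussed above, the sketch below uses k learnable queries that cross-attend to encoded scene tokens and re-predict k trajectories at every decoder layer. it deliberately omits the paper's actual motion query pair design (static intention query plus dynamic searching query), the mixture output head and the dense future prediction task, so all module names and sizes here are assumptions.

```python
import torch
import torch.nn as nn

class QueryTrajectoryDecoder(nn.Module):
    """K learnable queries attend to encoded scene/agent tokens; each decoder
    layer re-predicts the K trajectories so later layers refine earlier ones."""
    def __init__(self, d_model=128, num_queries=6, horizon=80, num_layers=3):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(num_layers)
        )
        self.heads = nn.ModuleList(nn.Linear(d_model, horizon * 2) for _ in range(num_layers))
        self.horizon = horizon

    def forward(self, scene_tokens):                      # (batch, num_tokens, d_model)
        batch = scene_tokens.shape[0]
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        trajs = []
        for layer, head in zip(self.layers, self.heads):
            q = layer(q, scene_tokens)                    # cross-attend to the scene
            trajs.append(head(q).view(batch, -1, self.horizon, 2))
        return trajs                                      # one (batch, K, horizon, 2) per layer

decoder = QueryTrajectoryDecoder()
out = decoder(torch.randn(2, 50, 128))
print(len(out), out[-1].shape)
```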
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper investigates particlebased state estimation under the presence of unknown observation and transition models this is challenging for a number of reasons in particular due to the nondifferentiable resampling step in particlebased methods prior work has proposed leveraging fishers identity to derive a maximum likelihood objective for the model parameters that bypasses the resampling step however such existing approaches are computationally expensive for large models this paper builds on this prior work and introduces a particle approximation that trades off bias for computations efficiency the authors compare this approach with existing baselines and demonstrate superior performance on a realworld and synthetic av dataset strengths this paper studies a wellmotivated problem in real world scenarios the observation transition models may be unknown necessitating methods that can estimate the parameters of these models the introduction and methods section are wellwritten the paper does a good job detailing the shortcomings of prior work providing background information for the method and laying out the proposed approach in a stepbystep basis the experimental results are convincing demonstrating superior performance compared to a number of baselines across a number of quantitative metrics weaknesses the paper highlights that this approach is useful in multiagent settings yet there is nothing about the method specifically tailored for multiagent problems while the experiments focus on a multiagent setting this method seems designed for any general state estimation problem this makes the message of the paper somewhat confusing even after reviewing the appendix the paper provides sparse details about the datasets ideally the authors can help to answer the following questions who collected the real world dataset what is the nature of the agents in the dataset in which specific ways are the real and synthetic datasets different the paper has a limited number of figures and visualizations it would help to provide visualizations of the datasets and qualitative analysis of the results some of these are available in the appendix perhaps they can be brought up to the main section of the paper the limitations that the authors describe in section 6 namely scaling this method to more complex empirical domains with occlusions and highdimensional observations while these are not strictly necessary components they can help to strengthen the empirical findings docsepthis paper proposes a method for learning observation and transition models for birdseye view multicar autonomous driving scenarios using a fixedlag approximation of the score function along with a deterministic motion model inference of highdimensional models can be achieved strengths the paper is wellwritten and the method is generally welljustified the method leverages specific assumptions from the birds eye view autonomous driving scenario to learn observation and transition models weaknessesquestions more evaluations on the fixedlag size would be beneficial at what point is path degeneracy an issue at what point is a fixedlag uninformative since this most likely depends on the number of particles as well it would be nice to see a comparison with the fixedlag window size and number of particles being modified simultaneously why does the real data show less performance difference when modifying fixedlag why did real data cause 
large gradients across all methods if performance is independent of fixedlag size what is the method gaining plot shows minimum of l5 for realdata but what happens for l15 if this small of window size gives similar results for realdata then learning the observation and transition models may not require longer horizon inference if dpfsgr performs better in terms of accuracy for 25 steps what is preventing methods from also limiting the number of steps that is used for training since other methods may or may not differentiate through the marginal loglikelihood does it make sense to even compare the baselines using this relative bearing is usually not available as a measurement and estimates would be somewhat noisy is this a realistic setup other comments figure 1 is somewhat wasteful in terms of white space and also does not illustrate much consider improving the figure as this is the only one in the paper one of the figures in the supplementary material may better illustrate the method and application than the current figure 1 how sensitive is the method to the agents state noise in reality this will be imperfect is assuming the motion model to be deterministic adequate in realworld scenarios this may not be the case docsepthe paper proposes a new approach for particle filtering approach for estimating the score function of statespace models ssm the authors do so using the fisher identity to circumvent the nondifferentiable sampling step in particle filtering for estimating the score function moreover they circumvent the potential issue of path degeneracy where the particles converge to a single one by using a fixed lag l up to which the estimates are calculated based on the assumption that observations after a time tl are not very useful for estimates at the current time step they also derive the use of a motion model for policybased approximations by showing that the gradient of the policy corresponds with that of the ssm allowing it to be plugged into the gradient of the score function their results for approximating the states of external objects from birds eye views of a vehicle show good loglikelihoods and state estimates on both real and simulated datasets strengths the proposal for using the fisher identity for approximating ssms enables low variance state estimates and also circumvents the nondifferentiable nature of the sampling process the computational capacity is drastically reduced by considering a fixed time window up to which the estimates are calculated rather than going through the whole trajectory the paper is very well written it is very easy to follow and the relevant works preliminaries and approaches are explained in simple understandable terms and the paper has a good flow from start to end weaknesses the paper would benefit from providing some more extensive information about the real dataset that is used such as how the data was collected sensors used example data etc it would have been better to showcase the results on some existing datasets like kitti for example or even on a subset of it this would allow the results to be more comprehensible the paper is currently missing an ablation study to show the extent to which the performance is affected by the different components these could be different lag lengths in sec 42 docsepthe model proposes training nonlinear nongaussian ssms using gradients approximated with fishers identity in order to efficiently compute the required smoothing distributions it proposes using fixedlag smoothing the resulting method is 
evaluated on a state estimation task in an autonomous vehicle setting where 2d poses of surrounding vehicles need to be estimated from observations the paper is well written and easy to follow and the main idea intuitions and mathematical details are clear using fixedlag smoothing to approximate the required smoothing estimates efficiently is a simple idea yet as this seems to be the papers main algorithmic contribution the effects and limitations of this approximation would need further investigation while i agree that the assumption seems reasonable for many systems ablations for different smoothing lengths l would be good to see in general i do not believe the conducted experiment is sufficient to allow assessment of the methods full potential as it relies on unrealistic assumptions lines 218220 largely preengineered transition and observation models and only considers a narrow scope of applications ### Summary:
the authors propose a multiobject state space estimation approach based on particle filters where the gradients are computed through fishers identity strengths clear story and well motivated to avoid biased or high variance gradient approximation wellstructured paper comparison to two baseline methods evaluation on a realworld dataset and two synthetic datasets detailed discussion of all model assumptions and limitations weaknesses the realworld av tracking task is not described in sufficient detail therefore it is difficult to assess how relevant the approach is and how well it could work on more complex realworld tasks in general a discussion on the applicability of the model to other problems not only multiobject tracking would be important as an update the modified figure 1 and the additional explanations at multiple places improve the paper
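for reference, the identity the reviews above keep referring to can be written as follows; the notation is mine and this is the standard textbook form of fishers identity for a state-space model, together with the complete-data factorisation and a note on the fixed-lag approximation, rather than the paper's exact estimator.

```latex
% Fisher's identity: the score of the marginal likelihood equals the
% posterior expectation of the complete-data score
\nabla_\theta \log p_\theta(y_{1:T})
  = \mathbb{E}_{p_\theta(x_{1:T}\mid y_{1:T})}
    \left[ \nabla_\theta \log p_\theta(x_{1:T}, y_{1:T}) \right]

% with the usual state-space factorisation of the complete-data log-likelihood
\log p_\theta(x_{1:T}, y_{1:T})
  = \log p_\theta(x_1)
  + \sum_{t=2}^{T} \log p_\theta(x_t \mid x_{t-1})
  + \sum_{t=1}^{T} \log p_\theta(y_t \mid x_t)

% fixed-lag approximation (sketch): the expectation of the time-t terms is taken
% under p_\theta(\cdot \mid y_{1:\min(t+L,\,T)}) instead of the full smoothing
% distribution, trading a small bias for bounded particle-path degeneracy
```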
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: hyperbo assumes the tasks are independent given the hyperparameters unlike typical metalearning approaches which assume tasks are related this allows for an efficient kronecker decomposition of the kernel and thus linear rather than cubic scaling across tasks using this model hyperbo performs bo as usual maximize the acquisition function to obtain the next point to evaluate hyperbo also makes the critical assumption of an offline pretraining of hyperparameters on a representative set of completed tasks during optimization itself the hyperparameters are fixed i have a few key concerns about this paper why fixed hyperparameters this is clearly the bottleneck of metalearned bo and if these hyperparameters are learned offline this seems to a somewhat eliminate the strength of hyperbo which is the linear scaling per task obviously this still helps significantly during the offline training but still a point of concern of mine and b seems not robust especially if the set of representative completed tasks is heavily biased hyperbo in the experiments uses the pi acquisition function is there a particular reason why this is pi is quite greedy even more than ei so is there any intuition as to why pi is appropriate in this situation in figure 2b i am somewhat concerned about the empirical performance of hyperbo though it beats the baselines it does so in a 4d search space using thousands of tasks this seems like overkill the error bars are also all over the place this is somewhat unfair of me to ask for i admit but i am curious if a much simpler approach involving restricting the search space given that it is fixed will help see the paper learning search spaces for bayesian optimization perrone et al 2019 i feel like there is definitely enough data for this to make a difference also the experiments only really concern one optimization problem involving optimizer hyperparameters though this one experiment is quite impressive in terms of the data involved iit would be nice to see another experiment say for tasks that might be easier like tuning a random forest i have some concerns about the assumptions used in the methodology as well as the experiments which leave a number of open questions in particular the fixing of gp hypers seems to largely remove the need for scaling which is the primary strength of hyperbo furthermore though the experimental set up uses a large amount of data to achieve somewhat unconvincing results in my mind and only one optimization problem is presented though worth noting is thoroughly analyzed thus i cant recommend acceptance at the time docsepthis paper presents a bayesian optimization method based on metabo the motivation is tasks can share the same parameter structure and this shared information eg correlation between tasks can be transferred to new and similar tasks an example is to optimize the the hyperparameters of a same optimizer across different architectures and different datasets this problem is a very important one in the community of bayesian optimization and a reasonable method can lead to a potentially dramatic decrease in the required computation especially when the objective function is very expensive this work tries to overcome limitations of existing methods for example the method proposed in this work does not need to evaluate all objective functions associated with all tasks on the same parameters the reviewer appreciates the authors putting 
effort into the empirical evaluation of the proposed method however the proposed approach is not interesting to the bayesian optimization community and is trivial to some degree the reviewer believes that the targeting problem presented in this work is a very important one and an effective method could be of great practical value in the abstract authors claim that data from similar functions could lead to a better prior for gp obviously a better prior for gp is desirable and that is why the marginal likelihood is used to optimize parameters of a gp from such a claim it is expected that an efficient method for bo will be presented by exploring novel similarities between tasks however throughout this paper there is no definition of a similarity between tasks and tasks are treated as independent this raises my concern on this works novelty which is my biggest concern on this paper authors claim that the critical difference between this work and standard bo algorithms is the initial learning process in line 2 of algorithm 1 the corresponding likelihood of this approach is given in eq2 i do not get the point how this approach is different from existing gp modeling and eq2 is simply the unnormalized marginal likelihood for all data points since all tasks are assumed to be independent such a formulation is not only trivial to the gp community but also to the empirical bayes community additional minor issues 1 the graphical model for gp in figure 1 is wrong 2 there exist a lot of inconsistencies in this paper in assumptions section it is assumed the variance is known however the variance is a hyperparameter in the marginal likelihood 3 lots of claims and statements are superfluous for example authors claim one limitation of existing approaches is the total number of bo iterations must be set in a manual way however throughout this paper the number of iterations is still predefined what is the point of saying this is a limitation while not touching it at all another example authors claim interpretability of intermediate steps is lost in existing methods however this problem is not touched either 4 another contribution of this paper is a tuning dataset i can see the value of such a dataset however failing to explicitly describe the required computation resources makes claiming this being a contribution less convincing the proposed method is trivial the theoretical part presented in this paper is very minimal and incremental docsepthis paper suggests a meta bayesian optimization strategy that optimizes free parameters of gp including a prior function and noise variance where multiple sets of historical observations are given in particular the proposed method chooses a free parameters using one of three approaches i optimizing a marginal likelihood ii measuring kl divergence iii considering both marginal likelihood and kl divergence the authors finally show the theoretical analyses on regret bounds and the numerical results on hyperparameter optimization reasons to accept it is wellwritten and wellorganized it solves a very interesting problem which transfers a history to the current task in bayesian optimization setup compared the work by wang et al 2018b it solves more realistic setups it provides promising numerical results and sound theoretical results reasons to reject i do not think that it degrades the contributions much but fourdimensional search space is relatively small compared to other bayesian optimization or hyperparameter optimization papers following the above point is there any 
specific reason why the authors use fourdimensional search space i do not think this algorithm is not scalable moreover for example batch size can be one of the metaparameters to be optimized questions to authors 1 can you elaborate why the proposed method does not train a gp model every iteration eg every t 1 ldots t i think that it can be possible without relatively expensive computational costs 1 h nll does not use a matching dataset right if you did not use multitask gp regression which has an additional input to indicate task information does h nll ie optimizing equation 2 with dn work appropriately i think that this paper addresses an interesting problem and suggests a novel method as described above thus i would like to recommend acceptance docsepthis paper is concerned with speeding up bayesian optimization by using evaluation data from previous related tasks defined over the same configuration space the authors propose to model the data from each experiment or task by independent gaussian processes which all share the same mean and covariance function this surrogate model can be learned from past data the paper also presents experiments on a fairly simple search space of 4 optimizer parameters this is done for a bunch of datasets and nn models and there is a pretty simple extension of theoretical results from wang 2018b the problem of warmstarting hpo by making use of data from previous experiments is an obvious idea and it has seen a large amount of past work much of which the authors of this submission do not seem to be aware of neither apparently was wang 2018b which seems more of a theoretical paper in particular there is quite a lot of work which uses gp models and scales linearly in terms of the number of past experiments contrary to what is stated in the introduction two of the most interesting ones are maybe 1 2 the authors here cite perrone 2018 which has these citations and more so it is pretty odd the authors do not mention or compare against any of them given the straightforward nature of what is proposed here a setup closely related to what is done in 3 id be quite surprised if for example 1 would not outperform it after all the assumption that data from experiments on quite different models can be modeled by the same mean and covariance function is pretty strong there are all sorts of issues with this idea for example what if data from some tasks is much larger than data from others moreover in what is proposed here the surrogate model parameters do not even seem to be adapted to the current task even as data from it becomes available here methods like 1 2 seem much more compelling to me as they try to for example rank previous experiments by closeness to the current one 1 is doing this without having to define any metafeatures of the dataset and also of course without relying on observations at the same configurations given you model your data with a gp you should certainly not need that anyway the experiments are not meaningful because essentially all relevant prior work is missing for comparison the authors more or less compare their proposal in two variants against a bunch of baselines as if there was no revelant prior work in fact they even seem to invent on their own methods to compare against such as mimo in a way which has never been used for transfer hpo why please read about and compare against relevant prior work given they cite work eg perrone 2018 they should have been aware apart from that i also do not get much out of the experimental setup why was it 
chosen that way does it have any practical relevance does anybody else use this learning rate schedule or was it just made up for this paper i also did not find a discussion of a pretty critical point how are the datapoints chosen for tasks you offline train on in order to be realistic these would have to be active choices themselves because that is data we could have been obtained by running bo on them instead my suspicion is that past data was sampled randomly which would correspond to pure exploration random search such data is obviously more valuable to obtain a good surrogate model fit but also more expensive to obtain in the real world one would have to run random search 1 feurer etal practical transfer learning for bo httpsarxivorgabs180202219 2 wistuba etal twostage transfer ecml 2016 3 golovin etal google vizier kdd 2017 this paper proposes a simple idea for wamstarting bo by fitting the parameters of a gp surrogate model on past data unfortunately a lot of relevant prior work is ignored here and not compared against instead the proposed approach is compared against simple baselines as well as methods that mostly seem to have been made up such as mimo ### Summary:
this paper claims a practical improvement over one of the earlier meta bo methods warmstarting bo or hpo by making use of data from past experiments or tasks seems to be interesting and useful for some applications in fact there is a large amount of work on this topic but a lot of relevant prior work is ignored in this paper unfortunately i appreciate the authors efforts in responding to reviewers comments however after the discussion period most of the reviewers had serious concerns about this work pointing out that the proposed method is rather trivial and the comparison is made only against a simple baseline it was also suggested to improve the experiments while the idea is interesting the paper is not ready for publication at the current stage
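the pretraining objective debated above, summing the gp marginal likelihood over independent past tasks with shared hyperparameters (eq 2 in the reviews), can be illustrated with a small numpy/scipy sketch. the constant mean, rbf kernel, parameterisation and toy data below are assumptions of this sketch, and the actual method may use richer learned mean and kernel functions.

```python
import numpy as np
from scipy.optimize import minimize

def gp_nll(params, X, y):
    """Negative log marginal likelihood of a GP with constant mean and RBF kernel."""
    mean, log_ls, log_sf, log_sn = params
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(2 * log_sf) * np.exp(-0.5 * sq / np.exp(2 * log_ls))
    K += np.exp(2 * log_sn) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    r = y - mean
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return 0.5 * r @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

def pretrain_shared_gp(tasks):
    """tasks: list of (X_i, y_i) from past experiments, treated as independent
    draws from one GP, so the objective is simply the sum of per-task NLLs."""
    obj = lambda p: sum(gp_nll(p, X, y) for X, y in tasks)
    res = minimize(obj, x0=np.zeros(4), method="L-BFGS-B", bounds=[(-3, 3)] * 4)
    return res.x  # shared mean/kernel hyperparameters, kept fixed during later BO

# toy usage with two fake "completed" tuning tasks on a 4d search space
rng = np.random.default_rng(0)
tasks = [(rng.uniform(size=(20, 4)), rng.normal(size=20)) for _ in range(2)]
print(pretrain_shared_gp(tasks))
```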
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: overall this paper proposed soft decoupled encoding sde a special multilingual lexicon encoding framework which can share lexicallevel information without requiring heuristic preprocessing experiments for lowresource languages show consistent improvements over strong multilingual nmt baselines general comments to me this paper is very interesting and is nicely summarized and combined previous efforts in two separated directions for sharing multilingual lexicons based on the surface similarity how the word is spelled eg subwordcharlevel models and based on latent semantic similarity eg gu etal 2018 however in terms of the proposed architecture it seems to lack some novelty also more experiments are essential for justification i have some questions 1 one of the motivation proposed by gu etal 2018 is that spelling based sharing sometimes is difficultimpossible to get eg distinct languages such as french and korean but monolingual data is relatively easy to obtain some languages such as chinese is not even spelling based will distinct languages still fit in the proposed sde in my point of view it will break the query vector to attention to the semantic embeddings 2 how to decide the number of core semantic concepts s in the latent semantic embeddings is this matrix jointly trained in multilingual setting 3 is the latent semantic embeddings really storing concepts for all the languages say would you pick words in different languages with similar meanings will the they naturally get similar attention weights in other words do multiple languages including very low resource languages learn to naturally align together to the semantic embeddings during multilingual training i am a bit doubtful especially for the low resource languages 4 it seems that the language specific transformation does not always help is it because there is not enough data to learn this matrix well 5 during multilingual training how you balance the number of examples for low and high resource languages docsepthis paper focuses on the problem of word representations in multilingual nmt system the idea of multilingual nmt is to share data among multiple language pairs crucially this requires some way to tie the parameters of words from different languages and one popular method is to share subword units among languages the problem is that subword units in different languages may not be semantically equivalent and many semanticallyequivalent concepts are not represented by the same subwords this paper proposes an alternative way to share word representation in particular by proposing a common set of semantic concept vectors across languages which are then folded into the word representations via attention the problem is wellmotivated and the proposed solution is reasonable previous works such as gu et al 2018 have been motivated in a similar fashion and the proposed solution seems to outperform it on the ted dataset of qi et al 2018 the experiments are informative the main open questions i have are a varying the latent embedding size it seems like only 10000 is tried since this is the main contribution of the work it will be desirable to see results for different sizes is the method sensitive to this hyperparameter also suggestions on how to pick the right number based on vocabulary size sentence size or other languagecorpus characteristics will be helpful b what do the latent embeddings look like intuitively will 
they be very different from those from gu et al 2018 because you are using words rather than subwords as the lexical unit c the explanation for why your model outperforms gu et al 2018 seems insufficient it would be helpful to provide more empirical evidence in the ablation studies in order really understand why your method which is similar to some extent is so much better the paper is generally clear here are few suggestions for improvement table 1 please explain lex unit embedding encoding in detail for example it is not clear what is jointlookup vs pretrainlookup it can be inferred if one knows the previous works but to be selfcontained i would recommend moving this table and section to related works and explaining the differences more exactly sec 42 explain the motivation for examining the three different lexical units table 3 model lookup ours was confusing do you mean our implementation of neubig hu 2018 or ourssde i think the former are the word representions in eq 4 defined for each word type or word token in other words for the same word puppy in two different sentences in the training data do they have the same attention and thus the same esdew you do not have different attentions depending on the sentence correct i think so but please clarify actually figure 2 has a lstm which implies a sentential context so this was what caused the potential confusion there are some inconsistencies in the terms eg latent semantic embedding vs latent word embedding lexical embedding vs character embedding this makes it a bit harder to line up sec 44 results with sec 32 methods minor spelling mistakes eg dependant dependent please doublecheck for others docsepthis paper presents an approach to creating word representations that operate at both the subword level and generalise across languages the paper presents soft decoupled encoding as a method to learn word representations from weighted bags of charactern grams a language specific transformation layer and a latent semantic embedding layer the experiments are conducted over lowresource languages from the multilingual ted corpus the experiments show consistent improvements compared to existing approaches to training translation models with subword representations the ablation studies in section 44 are informative about the relative importance of different parts of the proposed model can you comment on how your model is related to the characterlevel cnn of lee et al tacl 2017 in the experiments do you cotrain the lrls with the hrls this wasnt completely clear to me from the paper in section 42 you use phrases like concatenated bilingual data but i couldnt find an explicit statement that you were cotraining on both language pairs what does it mean for the latent embedding to have a size of 10000 does that mean that ws is a 10000 x d matrix is eq 4 actually a residual connection as per he et al cvpr 2016 it looks more like a skip connection to me why do you not present results for all languages in section 46 what is the total number of parameters in the sde section of the encoder the paper states that you encode 15 character ngrams and presumably the larger the value of n the sparser the data and the larger the number of parameters that you need to estimate for which other tasks do you think this model would be useful ### Summary:
although some may find the proposed approach as incremental over eg gu et al 2018 and kiela et al 2018 i believe the authors clear motivation formulation experimentation and analysis are solid enough to warrant the presentation at the conference the relative simplicity and successful empirical result show that the proposed approach could be one of the standard toolkits in deep learning for multilingual processing j gu h hassan j devlin vok li universal neural machine translation for extremely low resource languages naacl 2018 d kiela c wang k cho contextattentive embeddings for improved sentence representations emnlp 2018
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: the paper presents a new loss function for survival analysis based on proper scoring functions to lessen the penalty that wrong predictions that are confident make under the logloss the paper is interesting however the benefit over the traditional maximum likelihood estimator is small and the writing needs a bunch of work i would also like to see an eval on data with far less censoring a couple of comments 1 ehrs have only been generally adopted in the last couple of years only a couple of places have more 2 binary classifier citation on page 1 avati rajkomar should also cite the plethora of recent machine learning for healthcare results in this field 3 likelihoods are calibrated as is any error measured by a proper scoring loss 4 there are other methods to fit survival functions such as adversarial timetoevent modeling by chapfuwa in icml 2018 there are probably also moment methods 5 i think the evaluation might also want utility because sharpness is a utility claim 6 some of the statements in the writing are funny like probability distributions are uniquely identified by parameters im not sure this is true with neural nets with symmetries the paper doesnt need such claims 7 instead of lognormals i would like to see something nonparametric like the categoricals used for maximum likelihood estimation without latents in the limiting model in deep survival analysis missingness and nonparametrics by miscouridou at mlhc 2018 docsep my main concern is that the authors fail to compare their approach to any of the modelling approaches discussed in the related works section in particular as mentioned by the authors the wttternn has a similar architecture and thus would have been a crucial baseline for comparisons furthermore i would have liked to see an evaluation on more datasets especially since the data in appendix h indicate that the proposed approach is only marginally better than mlebased model fitting finally in addition to the metrics presented conventional metrics such as the cstatistic would have been interesting i further miss a discussion of alternative approaches to achieve well calibrated scores especially posthoc calibration using the validation set as discussed in guo et al icml 2017 related work is incomplete for example the use of tensortrains in rnns to model ehr data yang et al would the proposed approach not benefit from the use of such tensorization to better model the highdimensional sparse ehr data references guo et al on calibration of modern neural networks icml 2017 yang et al modeling progression free survival in breast cancer with tensorized recurrent neural networks and accelerated failure time models machine learning for healthcare conference 2017docsepthe authors introduce an extension of continuous ranked probability scores crps to the timetoevent setting termed survivalcrps for both right censored and intervalcensored event data further the authors introduce a scale agnostic survivalauprc evaluation metric that is analogous to the precisionrecall curve used in classification and information retrieval systemsmodels the claim that the proposed approach constitutes the first time a scoring rule other than maximum likelihood is used seems too strong unnecessary and irrelevant to the value of the presented work it is not clear how the authors handled the irregularity in time of ehr encounters in the context of an rnn specification also if the rnn specification
considered is similar to martinsson 2016 why this wasnt considered as a competing model in the experiments in table 1 it is not clear what the error bars are also they seem too small the proposed approach addresses important questions in timetoevent modeling namely calibration and interval censoring although the connection with crps is interesting first of the two equations in page 3 it is quite similar to an accelerated failure time formulation which for a lognormal specification is standard and popular due to similar reasons to those highlighted by the authors but not mentioned in the related work the interval censoring is also interesting though straightforward and perhaps not as relevant in more general timetoevent settings where events other than age are considered the survivalauprc is not sufficiently motivated without motivation or an intuition of why it should be usedpreferred it seems disconnected from the rest of the paper and its contributions without a more comprehensive evaluation that includes additional datasets and competing models described in the related work section it is difficult to assess the value of the proposed approachdocsepthe paper proposes the use of survival continuous ranked probability score instead of maximum likelihood estimation for personalised probabilistic forecasts of timetoevent data thus estimating a distribution over future time the authors describe the evaluation their method using 1 proper scoring rule objectives 2 evaluation of calibration using sharpness as a metric 3 the survival precision recall curve the authors then apply these techniques to predicting timetomortality using an rnn that takes ehr patient records to predict the probability of death at a given time point its not clear how this is related to the survival crps model or how this model is incorporated into the rnn overall this is an important framework for estimating personalised predictions of survival events for patients with intervalcensored data the authors present a well thoughtout paper with clearly and realistically articulated modelling assumptions the authors also give an excellent critique of the underlying assumptions of current stateoftheart survival methods the authors are also to be commended for the mathematical elegance although the paper is very well written and extremely well structured i struggled with the lack of experiments available in the paper the text embedded in figure 3 is too small the results section is somewhat sparse although the mathematical formulation is wellmotivated and structured its not clear what the contribution of this work is the difference between crpsintvl and mleintvl is incremental and its unclear what the significant benefits are of crps vs mle what would the interpretation of these differences in a realworld setting ### Summary:
all reviewers agree to reject while there were many positive points to this work reviewers believed that it was not yet ready for acceptance
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: in this work the authors extend the dabs benchmark 1 to include five more datasets the new datasets are carefully chosen to cover a variety of underserved domains i bacterial genomics ii semiconductor wafer manufacturing iii particle physics iv protein biology v satellite imagery the authors provide evaluation of two new variations of masked autoencoders and contrastive learning under varying levels of corruption masking in the data for the five new datasets and the seven datasets from the original dabs benchmark generalized mae does well on the satellite image dataset but it is not consistently the case across domains furthermore the optimal amount of corruption is not just domain dependent and can depend on other factors making it hard to choose 1 tamkin alex et al dabs a domainagnostic benchmark for selfsupervised learning neurips 2021 datasets and benchmarks track the authors consider an interesting set of datasets from a wide range of disciplines given the wide range of datasets covered the updated dabs benchmark should be of interest to the broad community to the best of my understanding the authors have properly answered questions regarding the datasets containing offensive content i have several concerns regarding the paper significance of the contribution 1 out of the five datasets that the authors use three of them higgs genomics eurostat are directly available in tensorflow already this implies the datasets are already in a usable format and i do not see what the authors contribution is in terms of these datasets the introduction and description in the main text feels misleading 2 the two algorithms that the authors use are a minor variation of maes and contrastive learning therefore i do not think there is any insightful algorithmic variation that the authors introduce either i understand that this is the benchmarks track and we do not expect completely new methods despite that i think what is introduced should not be stated as a new universal ssl method but should be honestly acknowledged as natural extensionsvariations of well known approaches 3 thirdly the authors show a table of results i did not gather any new insights from the results and that was not very pleasing 4 finally there is a bunch of experiments that the authors state ran into an issue and have been marked as pending for some other results are marked om i understand that issues can happen and do not want to penalize authors for this at the same time it seems unfair to others by accepting this we are allowing authors extra time to gather results that others did not have docsepfive new realworld datasets in science and engineering are added to the original 7 datasets in dabs 10 also two additional unsupervised learning methods are introduced capri and a generalized masked encoding both these algorithms and the shed algorithm from dabs 10 are then applied to all the 12 datasets and the variation of influence of corruption fraction on performance across the 12 domains and the three algorithms is observed the process and results convey the potential of using dabs 20 for assessing robustness of domainagnostic ssl methods and also for studying the effect of design decisions across domains supports generalization of ssl algorithms to lessstudied and diverse domains and modalities enables assessment of how certain design choices for one domain affect the performance on another domain in this work the
crossdomain difference in the influence of masking permuting of embeddings becomes evident tasks associated with the datasets introduced are realworld tasks one of the datasets bacterial genomics includes indistribution and outofdistribution data enabling an assessment of robustness of models to distributional shifts each newly introduced dataset is of a different size which can help get an idea of how a domainagnostic algorithm is able to scale the baseline results indicate that the same ssl algorithm and settings do not perform well in all domains this is the challenge for ssl algorithms to generalize to multiple domains without needing manual domainspecific tuning the fact that the results indicate this challenge supports the purpose of this dataset easy execution of pretraining and transfer learning with the help of newly added code is mentioned but the code dataset are not made available for review claims surrounding universal ssl may be a bit too broad since graphs point clouds etc are not included docsepthe paper extends the dabs benchmark with datasets from five new domains proposes a new universal ssl algorithm extending masked autoencoding the paper also investigates an interesting metric the corruption rate across the different domains extension of a wellestablished benchmark significant contribution with 5 new datasets domains new domains are novel in terms of being less studied than extensively studied such as images text speech etc interesting investigation of corruption rate and how it differs based on the domain at hand discussion of external and internal validity the scope of the paper is relatively wide with the multiple domains and the two new algorithms this also makes it difficult to supply a datasheet for all newly introduced datasets elaborate on the datasets in terms of split distribution etc additionally it makes it challenging to cover the details of the benchmarking with a good level of detail docsepthe authors contribute dabs 20 an extension to dabs dataset for universal selfsupervision now with additional five new domains the authors also propose two new algorithms and evaluate them over all the 12 domains with different corruption fractions the paper takes universal ssl one step further in both dimensions datasets and algorithms this is a good contribution for the community the 5 new domain datasets are legitimate contributions however in my view mae does not account as a novel contribution to this research a lot of implementation details in section 3 could have been provided to make it a thorough read docsepthe paper proposed a benchmark dabs 20 which contains extended realworld science and engineering dataset domains and a universal ssl algorithm 1 a universal ssl algorithm is proposed along with the benchmark and works as a strong baseline 2 the insight of evaluating and employing the universal selfsupervised on realworld science and engineering datasets would be helpful for both ai for good and ai for science 3 comprehensive discussion including both internal validity and external validity 1 the results part mainly explored the masked ratio experiments on the proposed benchmark compared with two baseline methods more details on the experiments models hyperparameters are expected to be provided 2 the documentation of the codebase is constrained into readme file without the detailed apis ### Summary:
dabs 20 extension of dabs to include five more datasets and serves as benchmark for selfsupervised representation learning the new datasets cover new domains such as genomics industrial images biology satellite imagery etc in addition two new selfsupervised learning methods are evaluated and their robustness to domains and hyper parameters are evaluated in addition a new technique called capri is introduced which combines benefits of masked autoencoders and contrastive learning to learn representations the reviewers are generally positive except r1 who recommends reject the main concern of r1 is that the contributions over the dabs is not significant eg several datasets are available in tensor flow contrast this with r2 r3 r4 r5 who find the contributions valuable given the importance of unsupervised representation learning and the significant effort being put in the research community this work is valuable thus they recommend accept
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: interesting work the imbalance problem is an essential problem for machine learning and its applications even though many works focus on the foregroundbackground class imbalance problem such as polyp detection with two classes of polyp and background this work focuses on foregroundforeground class imbalance where multiple classes exist the proposed method seems technically sound the proposed method randomly samples images from the imbalanced dataset following a probability distribution which is computed before the training by solving a quadratic optimisation problem this optimisation finds a probability distribution that gives equal expectation of class frequency for all classes the authors demonstrated the validity of the proposed method by comparing the detection performances among the proposed method and the two classical subsampling and oversamplingbased methods in the application to fetal anatomy detection in this comparison the proposed method improved the mean average precision for smallnumberimage classes more than the classical methods as a result the proposed method achieved the best performance over all anatomies among the three methods the setting of data splitting in the experiments is unclear for fair evaluation the dataset should be split into training validation and test data without duplication of patients without this noduplication splitting we cannot evaluate the generalisation ability of the trained model the presented experiments look only at the evaluation on training data im interested in the generalisation ability of the model trained with the proposed method only one dataset is adopted in experiments evaluations with a few datasets are welcome for the demonstration of the validity of the proposed method the survey also looks limited docsepthe paper tackles an important problem of training machine learning models on imbalanced datasets a problem that is prevalent in medical imaging settings the paper is mostly wellpresented the results demonstrate improved results over the two other methods compared against showing potential for realworld use the major weakness of this paper is the lack of context for related work in this area this manifests in two ways in both the introductiondiscussion and experiments the imbalanced dataset problem is a wellstudied problem with many existing rebalancing methods available the paper mentions only two foregroundforeground papers and does not compare the results with either of those existing methods a fairly common method is to oversample classes inversely proportional to their frequency yet this is not used as a baseline the paper would be significantly improved with more acknowledgement of the context of balancing methods and improved comparisons in the experiments currently only heuristic oversampling of two minority classes and uniform sampling are compared against the proposed methods these are decent positive controls but they do not place the work in the context of existing work note i am not setting a criterion that they must outperform all previous methods but the relative performance to alternative ideas is important docsepthe paper is generally well written and contains a thorough proof of their method their method performs significantly better than baselines and requires only minimal overhead they train their method multiple times in order to calculate standard deviation and pvalues stating the significance of their results the baselines are very weak a follow up
paper of one of their main references proposes a method regarding foregroundforeground class imbalance in object detection this is not mentioned in the paper and would make for a great comparison1 the relevance for the medical context could be discussed a bit more just using a medical dataset is not enough they evaluate only on one dataset figure 1 2 are not mentioned in the papers text 1 oksuz k cam b c akbas e kalkan s 2020 generating positive bounding boxes for balanced training of object detectors in proceedings of the ieeecvf winter conference on applications of computer vision pp 894903 docsepthe paper is very well structured and written i enjoyed reading it the research problem is stated clearly which is really appreciated the introduction is clear the related work section could be more detailed but still succeeds in positioning the work properly the presentation of the proposed solution is well written i appreciate the balance between textual explanations and mathematical formalism leaving advanced details for the appendixes the conversion of the sums into matrices in equation 1 builds bridges between the theory and the implementation in 321 the choice of the hyperparameters is well detailed in 332 the statistical analysis of the results presenting not only the best results but also the mean and standard is very good and something i would like to see more often in the papers the conclusion in 4 is well written presenting not only positive points of the method increased performance vs baseline but also negatives increased complexity due to the introduction of new hyperparameters what could be improved is mainly the experimental setup my main concerns are 1 why did you choose yolo and not fasterrcnn for example which yields better performance and why only yolo testing the method on different models would strengthen the paper a lot 2 also validating the approach on other datasets would make the paper much stronger 3 the yolo model used is pretrained on voc and finetuned on the fetal anomalies detection dataset could you detail more how the finetuning is done for example if any part of the model is frozen or how you handle the learning rate minor points 4 in table 2 how were the thresholds chosen and why only 2 5 in the discussion section i am missing a word about complexity vs performance the method increases the performance on the final task but increases the complexity of the training i am wondering if there is a way to quantify this increase of complexity for example by measuring the extra time or extra computation needed ### Summary:
the paper receives overall positive comments from four knowledgeable and independent reviewers they all like the novelty of the proposed work in addressing imbalanced sampling one of the most serious issues in medical image analysis however they also share the common concern of a lack of sufficient validation currently only one dataset is used the authors also provide a rebuttal about this while i agree with the argument they present time and resource constraints medical imaging focus etc it is not difficult to find another medical imaging dataset to test their idea therefore i strongly encourage the authors to conduct such an additional experiment to make their final version much stronger
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: this paper tackles the problem of generative modeling by using langevin dynamics to sample from the denoising score function recently this family of approaches song and ermon 2019 song and ermon 2020 has shown promising and competitive results being positioned as a potential alternative to gans the paper introduces different improvements over song and ermon 2020 a different sampling dynamic consistent annealed sampling that produces more stable training than the traditional annealing scheme by carefully scaling the injected noise second it is empirically shown that running a denoising step on the generated sample leads to an improvement of the fid score based on this observation the paper proposes to use a denoiser trained in an adversarial fashion to synthesize more realistic images the work addresses the very relevant problem of how to synthesize images in a realistic way introducing some modifications to existing works that lead to an improvement on the quality of the generated image the paper is well written presents a nice introduction to the method which allows to motivate the different modifications in a natural way the proposed modifications are analyzed in lowdimensional toy experiments and in smallscale images cifar lsunchurches stackedmnist in what follows i list a few questions 1 would it be possible to analyze the strategy of sampling presented in kadkhodaei and simoncelli 2020 concurrent work and compare to the one proposed in the paper both strategies seem to improve the stabilization of the procedure by scaling the noise 2 regarding the step of applying the denoiser to the generated sample i wonder what happens if the denoiser is reapplied also is this connected to the fact that the denoiser may have a fixed point and this fixed point might lead to a better sample 3 regarding using an adversarial denoising in the denoising literature there are a few works connecting score matching and stateoftheart image denoisers i would like to see a better discussion of this for example see romano y elad m and milanfar p 2017 the little engine that could regularization by denoising red siam journal on imaging sciences 104 pp18041844 reehorst e t and schniter p 2018 regularization by denoising clarifications and new interpretations ieee transactions on computational imaging 51 pp5267 after discussion i think this is a good paper and i would like to see it presented at iclr2021 docsepthe submission presents three contributions first the authors show the inconsistencies in the existing annealed langevin sampling used in scorematching generative models and propose to correct it with the newly proposed consistent annealed sampling cas algorithm the second contribution claimed is in providing evidence of the benefits of expected denoised sample eds furthermore the submission introduces a hybrid adversarial scorematching model that demonstrates improvements in terms of fid on simpler architectures the proposed cas algorithm is theoretically wellmotivated based on the observation that als is inconsistent with the scaling of the noise during the sampling process although the question whether noise should follow none other than geometric progression is still an open question the paper is wellwritten and the ablation study is carried out well however it is a bit confusing as to whether the eds although under a different name denoising jump is a contribution of this paper or is something
proposed prior to this work i understand that this denoising procedure has already been presented as a necessary technique in score matching models nevertheless i believe the authors contributed by showing that both als and cas move samples towards the eds proposition 3 and show additional empirical evidence of its benefits on synthetic and real datasets taking eds on the last langevin step diminishes the impact of cas doesnt bring unambiguous improvement in fid scores in the experiments otherwise very interesting finding both theoretically and algorithmically and substitute for als the effect of the hybrid model is also not persistent and depends on the architecture used for an incremental improvement a combination of two models the improvement is not consistent across architectures the paper does not explain whether there is a good rationale for such a combination therefore i remain sceptical about the results given all the above i am still leaning a bit towards accepting the paper as it covers an interesting finding relating to the als although the cas effect on performance is limited by the eds scorematching models are of broad interest for the iclr communitydocsepthe paper presents a novel approach for denoising score matching where the annealed langevin sampling has been substituted by consistent annealed sampling which adds more stability to the process the paper is in general clear and wellwritten the contributions are clearly highlighted and the proposed approach is conveniently compared with other state of the art methods demonstrating its superiority positive aspects the consistent annealed sampling proposed in this paper is more stable than the annealed langevin sampling the combination between gan and score matching improves the qualitydiversity of the generated sample negative aspects the limitation of the method to gaussian noise the presentation of a real scenario for your approach would have been a plus however i have some questions 1 who is nsigma parameter in algorithm 1 2 algorithm 1 line 4 there is no iteration over t in the loop 3 how does your denoising scheme work do you create noisy samples from your real data and try to denoise them using the proposed approach because taking a sample affected by random noise in the test phase i guess it wont work 4 the denoising scheme is used in a gan framework the denoised samples being perceived as real by the discriminator is the system trained endtoend or first you denoise the image and afterwards you train the gan 5 could you please indicate an application scenario which could benefit from this approach eg imagetoimage translation domain adaptation etc 6 your method is assuming gaussian noise can it be extended to the case of general noise a noise model which could be also learnt docsepthe article deals with generative models based on annealed langevin sampling rather than a gan theses models suffer from worse fid than gan authors proposed to denoise the last langevin samples to reduce the gap in performance with adversarial network the paper is really easy to read with good illustrations and supporting experiments in order to gain in comprehension especially for people new to als it would have been great if authors have proposed an illustration and comparison of the samples evolution along alg 1 and alg 2 authors are honest in their revised results comments but i dont known if they will be able to include the erratum in a final version as i was not aware before this review of annealed langevin sampling my rating may not be 
confident ### Summary:
this paper introduces an alternative to langevin sampling and also the idea of adversarial score sampling the reviewers are generally supportive of the paper pros the idea behind improving langevin sampling is theoretically justified and leads to a simple algorithm the idea behind adversarial score matching is also shown to be effective improvement over baseline cons two ideas packed into one paper which is reflected by the title as well from the narrative it could be thought that using eds on the last step of cas is the contribution of the paper
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: this paper designs an equation ie equation 5 in the paper to measure the impact or contribution of each participantagent in federated learning the designed measurement method is applied to the attention aggregation algorithm of federated learning a few experiments using penn treebank are conducted to support its claims this paper should be rejected because 1 the paper is unpolished and thus is hard to read 2 the novelty appears quite weak and 3 the experiments are difficult to understand and generally do not support its contributions concerns the paper is difficult to read due to the poor use of english many sentences are incomprehensible thus it was often impossible for me to determine exactly what the authors would like to say or describe please have your submission proofread for english writing style and grammar issues moreover please treat the equations as parts of sentences and make sure that the caption formats of figures obey the iclr format i also have a serious concern about the novelty of this paper if my understanding is correct due to the aforementioned reason subsection 33 is the only new material proposed by the authors however the proposed equation ie equation 5 seems like a design choice without any theoretical justification or any intuitive reason which significantly degrades the novelty of this paper finally the experiments should be refined to support its main claims as claimed in section 1 the proposed measurement method is realtime and has low computational complexity however no experiment or quantitative comparison addresses the running time and complexity of the proposed method versus shapley value actually the authors compared their method with a method of approximating the shapley value instead of the exact shapley value furthermore please add citations for the shapley value papers docsepsummary the paper proposes a new contribution measurement approach for federated learning the basic idea is that the agent with a larger model update has a larger contribution specifically based on fedatt 1 the impact of a client is computed as the local updates plus the impact of the previous round times a decay rate the experiments on a dataset show that the proposed approach can have a similar contribution measurement compared with shapley value 1 learning private neural language modeling with attentive aggregation ijcnn 2019 strengths 1 the motivation of the paper is clear 2 the studied area is important effective incentive mechanisms in federated learning are still an open challenge weakness 1 the proposed idea lacks novelty and may not be applicable in general federated learning algorithms the contribution of each client is simply evaluated by its local update in fedatt fedatt is not a widely used federated learning algorithm currently it is not clear whether the proposed approach is applicable to other standard federated learning algorithms such as fedavg also i do not understand why the paper focuses on fedatt instead of fedavg 2 the paper lacks reasonable explanations for the proposed approach a client may have arbitrarily bad data and the local updated model may be far from the global optimal model in such a case since the distance between the local model and the global model is large the contribution is also large according to the proposed approach which is not reasonable it is not clear how the proposed approach can handle such cases 3 the experiments are weak and
not clear a it is not explained how the agent contribution rate is computed b the experiments are conducted on a single dataset more datasets are needed c from figure 2 it is hard to say that the proposed approach has a similar measurement with sv d since the motivation is to reduce the computation overhead the authors should show compare the computation complexity or the computation time of the proposed approach and sv minor issues 1 the writing can be improved eg such for example 2 figure 1 is not referred to in the text 3 figure25 orange and blue colors are not explained docsepthe paper is to measure each clients contribution to training the federated learning model in particular the contribution is measured by the distance between the local model and the global model in each iteration the targeting problem is interesting and the use of attentionbased model divergence is also an interesting idea to measure the contribution however the paper lacks strict theoretical discussion to prove the proposed solution is a reasonable one rather than a heuristic method moreover the experiment is too weak to support the claims the papers technique contribution and originality are also limited below are some detailed concerns 1 the authors need to make a clear definition of the assumed application scenario so that the below problems can be avoided or solved if the clients contribution is linked to rewards it is unavoidable that some clients will produce fake data to gain more contribution to the commercial federation system therefore the paper should discuss the prevention of attacking by fake data for example if the client randomly shuffles the index of neurons in the trained local model wk then the clients local model will get a bigger skl calculated by equation 2 thus this client is likely to gain a big reward at every iteration according to equation 5 the contribution at the early stage will be discounted it is unfair for the clients to be selected at an early stage therefore from a systematic perspective some clients may refuse to contribute to the training process at an early stage 2 contribution is not enough the core method comes from the fedatt algorithm an attentionbased federated aggregation method the papers primary contribution relies on section 33 to measure the contribution according to the gradients 3 the experiments are too weak to support their claim more datasets and baseline methods are required for example the femnist feceleba it is unclear how to define an objective metric to measure the quality of the proposed method the contribution is a subjective feeling that various to different tasks and assessordocsepthe paper proposes a low computational complexity method for weighting contributions of clients in a federated learning setting the main contributions are to compare the weighting method with shapley values and their sensitivity to low data volume and quality the paper is based on the fedatt paper that calculates weights based on the euclidean distance between the server model and each client and for each layer the experimental setup is well described including details about the hardware software datasets model and evaluation criteria however the model only specifies a smaller grubased model without giving any details of what that model is they do not clearly describe some parameters of the approximation of the shapley value calculation reducing the value of the comparison between fedatt and shapley values they could also have taken additional steps to improve the claims 
confidence eg only one dataset was used which is relatively weak compared to the original fedatt paper the graphs in the results section could be described with more detail to explain what eg the colors of the special agents mean also there are no confidence measures specified making it hard to evaluate the claims validity the references include essential papers but are missing some core references such as federated learning and shapley values themselves also related papers such as active federated learning by goetz et al talk about very similar ideas but lack any mention in the paper the language and grammar could be improved and some of the formulations make it hard to read the comparison to shapley values is also not motived in any detail thus further reducing the paper contributions value ### Summary:
although this paper tackles an important problem all reviewers agree that it requires further work before it can be published first the paper would need to be polished in order to be easier to read stronger experiments would also be needed in order to support the claims of the paper eg by considering additional datasets and proper baselines finally an important concern about this paper is novelty and originality it is not clear at this point that the contribution is substantial enough for a conference like iclr addressing these points would significantly improve the paper
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: this paper proposes another variant of phrasebased mt for african languages involving native speakers for manual annotations instead of just using subwords or statistical phrase identification the authors propose to use the intuition of native speakers for translating african fon languages into french and vice versa according to their experiments bleu and other indexes significantly improved over standard ibm1 phrasebased machine translation however from the description and examples in this paper i have a little doubt about this improvement for creating the aligned corpus the authors say that they chose only short expressions namely 16 words according to the results shown in table 1 this essentially amounts to simply memorizing frequent idiomatic phrases therefore improvements with this kind of human intervention over such easy sentences are basically trivial of course the paper says that the test data comprises long and complex sentences but the examples are not thus i cannot identify whether the problem is really difficult or not even if the proposed human annotation is effective that does not seem to leverage the characteristic properties of african languages in section 3 un do ganji has an ambiguity about un but this kind of ambiguity of a word is shared by almost all the other languages imagine translating given in a conditional proposition the properties of african fon languages such as diacritics and affixation are not used here finally the proposed annotation algorithm on page 3 seems quite vague to me where does v come from if w is a word what is the meaning of w subseteq v also this algorithm seems to use a simple longest match however in many cases the usage of a word is only clear using succeeding words ie some forwardbackward algorithm is necessary for correct identification of a phrase that being said i strongly agree with the authors that neural machine translation of african lowresourced languages is important i hope that the authors would add more persuasive results and analysis to realize a practical translation of fon languages docsepthe authors investigate different tokenization methods for the translation between french and fon an african lowresource language this means that they compare different ways to construct the input and output vocabularies of a neural machine translation nmt system they further propose their own way to create those units based on phrases which is called web the nmt system the authors use follows bahdanau et al 2015 it is a gru sequencetosequence model with attention the dataset they use has been created and cleaned by bilingual speakers and consists of roughly 25k examples this is a really small dataset for nmt so the authors are taking on a really hard task web works in the following way after phrases have been found automatically bilingual speakers analyze what the longest phrases which correspond to translated phrases in the other language are only the longest phrases for each example are kept for the final vocabulary the authors show that web improves the performance in both translation directions by a lot on all metrics clearly showing that the work they invest into creating the vocabulary pays off thus i think this work is important to be able to provide speakers of fon with a functioning translation system however i am unsure if this work is suitable for a machine learning conference while the overall goal of this work is to
create an nmt system the main contribution is the manual cleaning of the dataset and semimanual creation of the vocabularies i would recommend to the authors to submit this paper to a conference with a stronger focus on nlp and nlp resources maybe lrec i further want to emphasize that i think work like this paper is incredibly important and the authors shouldnt feel discouraged importantly the manual labor needed for web has been a lot and its obvious that it helps for nmt i just dont think that this paper is a good fit for iclr minor point has the creation of web access to the test data if so the authors should change that or collect new test data to ensure a fair evaluation docsepedit after seeing others reviews i think i gave this paper a much higher score than the other reviewers simply because it is very novel with fon language i agree with all of your points about what is lacking but in my mind the novelty was enough to still give a 7 now i definitely think that is too high i think this paper can reasonably be rejected but id like to give actionable of constructive criticism since i do think the work on this low resource language is important for the nlp community with such low resources we cannot expect the same type of work as we would for other languages overview this paper discusses the problems of common tokenization strategies for low resource african languages and proposes a new tokenization method to overcome these problems they train low resource nmts using 4 different tokenization strategies to show that their proposed tokenization method leads to the best nmt results by several metrics contribution the authors contribute a new tokenization method code and a dataset the good very interesting and important work many people will be excited to use this data paper is mostly clearly written and easy to read the paper flows well someone with this paper could reproduce the work more or less the bad figure 1 is difficult to read and messy first by input you actually mean source the input would be the source sentence with its appropriate tokenization no also i think putting the english translation in a different font or color would be greatly helpful to our eyes i really think this must be fixed figure 1 is presently not pleasant to look at even though it has interesting results section 4 i think you really need to restate that the algorithm has a humanintheloop for clarity before describing your algorithm humans are only mentioned once in the algorithm indeed at first the words the following algorithm confused me because i thought it was more a methodology since step 2 is where the humans are in the loop unless you have a fon pos tagger and i am misunderstanding but then at the end i saw you include encode as step 4 so it is the machine the fact that i flustered a bit with my understanding here was confused and had to spend a few minutes thinking about it means it needs a bit of tweaking maybe add a comment saying step 2 is the humanintheloop step of the algorithm suggested additions i think more specific linguistic details about fon are missing for example if you could give us one or two sentences of fon in the beginning of the paper that demonstrate some of the difficulties of the language i think this would greatly strengthen the motivation you tell us that fon is a language with special tokenization needs and that standard tokenization methods do not alwaysadequately deal with the grammatical diacritical and tonal properties of some african language and you cite the relevant 
papers but i would still like to be shown i think just including two sentences that have some of these features and that gets the point accross of how would we tokenize this would really help the motivation its not that ireaders dont believe you when we are told but being shown makes it much more interesting and give people an appreciation for fon tokenization challenges can we get any information about how the annotators were trained i think this is standard for such papers other smaller suggested fixes section 5 near the end little grammatical mistake bunch of those errors has should be errors have section 63 please change the results from table 2 and table 1 to say table 1 and table 2 it does not make sense to list them out of order i also think it makes sense to switch figure 1 and figure 2 entirely ie figure 1 should be your results table and figure 2 should be the examples for us to see section 63 slightly confusing wording the second sentence is confusing to me and i am a native english speaker it is important to note that while bleu of other methods reduced on the frfon task wb improved on it to me saying blue reduced for the other methods means that you have some other baseline you are comparing to am i missing something are you comparing against fon fr questions section 62 does it really take all 500 epochs to run or do you have early stopping at some point when the loss flatlines because bpe is such a standard baseline why do you not include it as a baseline i know you cite the abbott martinus 2018 paper stating that bpe is bad for analytical languages but i still think it would prove a point to show bpe performing badly for your data overall very interesting work and cant wait to see this data be used i think the paper could be greatly strengthened by taking some time to include an example that demonstrates the linguistic and typological features of fon that makes it difficult ### Summary:
the authors investigate different tokenization methods for the translation between french and fon an african lowresource language lowresource machine translation is a very important topic and it is great to see work on african languages we need more of this unfortunately the reviewers unanimously agree that this work might be better suited for a different conference for example lrec since the machine learning contributions are small the ac encourages the authors to consider submitting this work to lrec or a similar conference
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: in the paper rotograd is proposed as a new gradientbased approach for training multitask deep neural networks based on gradnorm gradnorm is first formulated as a stackelberg game where the leader aims at normalizing the gradient of different tasks and the follower aims at optimizing the collective weighted loss objective under this formulation one can utilize theoretical guarantees of the stackelberg game by making the leader have a learning rate that decays to zero faster than the follower to further account for the different gradient directions a learnable rotation and translation are applied to the representation of each task such that the transformed representation matches that of singletask learning by adding an additional term accounting for learning this rotation the leader in the stackelberg game will minimize the loss to both homogenize the gradient magnitude and match the representation to singletask learning as closely as possible in general i find the direction of gradient homogenization for multitask learning very important and interesting the paper provides an interesting perspective through the stackelberg game formulation which provides a framework for selecting the learning rate of gradnorm type gradient homogenization methods the other contribution of the paper is a learnable taskspecific rotation that aligns the task gradients with singletask learning the proposal of a learnable rotation matrix seems an interesting idea although i am not sure if it has been proposed previously for multitask learning i find the first contribution of formulating the problem as a stackelberg game to be interesting and novel however in terms of the second contribution i have some concerns about whether it makes the most sense to align the transformed representation with that of singletask learning for mtl one of the key benefits is learning a better representation by sharing it across different tasks to encourage helpful transfer between the tasks by constraining the transformed representation to be close to the singletask learning representation it might limit the transfer between tasks since the representations are constrained to be equivalent to those learned by singletask learning i think it is helpful to think about using rotation invariant representations for aligning the gradient directions but it is questionable to align it to that of singletask learning another major concern is about the experimental results full experiments are only conducted on one realworld dataset the experiment on the second dataset seems to be very preliminary which might not be sufficient to justify the proposed method empirically also on the second dataset it seems the two different implementations of rotograd have a large discrepancy in the results which might need more investigation about why this happens meanwhile many ablation studies seem to be missing i am mostly interested to see experiments that validate the stackelberg game formulation for example by using different learning rates for the leader and the follower also it would be interesting to see how the proposed rotograd compares with pure gradnorm on gradient direction alignment overall i feel the experiments are not complete for validating the effectiveness of the method some minor points the description of the dgrad method seems to be missing also yu et al 2020 also deals with gradient alignment for mtl which could be
considered as a baseline to compare with yu t kumar s gupta a levine s hausman k finn c 2020 gradient surgery for multitask learning arxiv preprint arxiv200106782 after authors response i am not fully convinced by the explanation of the motivation behind rotation matrix in particular why it is aligning with the singletask learning which is counterintuitive the authors provided more ablation studies however the evaluation on datasets is still quite preliminary with some questions remaining such as why there is a discrepancy between the two versions of rotograd on the second dataset therefore i am keeping my original score docsepthis paper presents an extension of gradnorm to address task conflicting due to discordant gradient direction specially it introduces a rotation matrix to rotate the hidden representation from the last shared layer the authors put the proposed method in the context of game theory to show stability and convergence of the training which might be of merit the writing of the paper doesnt meet the publication standard needing major work to improve there are many typos and awkward sentences hindering understanding of their work also there are many places that need clarification for example in proposition 41 the inverse of the gradient of z with respective to theta needs to be calculated so what is the shape of this gradient matrix how it is necessarily to be a square matrix what deltatheta z represents the fnorm there is lack of adequate explanation of the motivation behind the objective in eq 6 by reading the paper i have no idea about the two oracle functions and why they are defined in the way shown in eq 8 eq 3 is inaccurate not aligning with that proposed in the gradnorm paper for the computation of lgradk eq 9 is problematic why rk zit does not appear in the objective function of the first optimization problem if this is because zikt rk zit dk then the objective in the second optimization problem would be just 0 why operating on z instead of the gradient in gradnorm can resolve the discordant gradient issue among tasks is not properly justified the reported empirical results are weak and do not support this method works as claimed docsepsummary this paper proposes an mtl method that encourages the gradients on shared parameters to have similar directions across different tasks the motivation is to reduce conflicts between gradients of different tasks so that training can proceed more smoothly and fit multiple tasks more easily the paper introduces a new way of thinking about this kind of method ie through the lens of stackelberg games which could be useful in reasoning about the convergence of such methods the method is shown to perform favorably against related methods especially in regression settings strong points minimizing gradient conflict is a wellmotivated way to reduce negative transfer the algorithm description is detailed and should be straightforward for others to implement stackelberg games are an interesting framework for thinking about methods like gradnorm and rotograd that adaptively guide mtl training weak points the theory is interesting at a highlevel but it is not clear that it provides insights on what makes rotograd work in the paper one main takeaway from the stackelberg games framework is that the methods converge if the leaders learning rate is asymptotically smaller than the followers this takeaway is implemented by decaying the leaders learning rate but it is not shown that this is a key point required for rotograd to work i would not be 
surprised if the results were unaffected if this decay were removed if this point is really important it should be illustrated in ablation studies more broadly since the point does not only apply to rotograd this ablation could also be done on gradnorm and other methods such ablations would be one way to connect the theory to the methods another main takeaway from the theory is that the rotation matrices and translation vectors should be updated with gradient descent instead of simply replacing them each step intuitively the algorithm would still make sense and be simpler if r and d were simply replaced experiments showing that the gradientdescent update rule is necessary would help show the value of the theory similarly the value of proposition 41 is not clear is it to prove stability does this have some particular connection to rotograd or is it a useful fact about hard parametersharing methods in general there is one ablation rotogradsgd but it is not clear how exactly it works can it simply update r and d however it wants or is eq 9 still used to regularize the updates in some way by adding the rotation matrices its possible that information that would be useful to share across tasks is instead stored in these taskspecific matrices that is conflict between tasks can beneficially lead to more general representations restricting r to be a rotation instead of any matrix is one step towards limiting the amount of information leakage into taskspecific parameters is there a conceptual reason to expect that the benefits from reducing conflicts will outweigh this leakage the experiments are on an intentionally very small architecture where one of the main issues is expressivity which gives rotograd an edge over methods that do not include an additional taskspecific matrix in section 51 does the method without rotograd do poorly because there are no taskspecific networks in that case although rotograd is motivated to reduce negative transfer table 1 shows that rotograd does not reduce negative transfer but rather improves positive transfer that is uniform does better than rotograd in the tasks where singletask is better than multitask but rotograd does better than uniform in the tasks where uniform is already better than singletask this makes me think that the benefits of rotograd are not coming from reducing negative transfer but from somewhere else is there an explanation for why rotograd does not work as well for multiclass classification tasks ie performs worse than all other methods for left and right is it because the taskspecific heads have larger output sizes eg could it be better to have a separate rotation matrix for each class figure 4 in a3 confirms that there is an issue here the cosine similarity is not higher for rotograd for the classification tasks overall from the limited scope of the experiments it is not clear that rotograd would provide practical advantages over competing methods the chestxray experiments show that although rotograd does not hurt much it does not help overall compared too uniform that said it would be still be interesting to see whether insights from stackelberg games could lead to practical improvements for this problem minor comments the writing has some issues these issues dont make the work unclear but they are a bit distracting some example suggestions for fixing distracting word choice palliate alleviate spoiled noted we have not being able to propose rotograd but also to derive we have proposed rotograd and derived there is also frequent nonstandard 
mixing of em dashes with spaces and commas rktalpha is a hyperparameter alpha is a hyperparameter the hyperparameter is alpha correct update i am very happy to see the new experiments that validate the implications of the stackelberg games theory the main drawback of the paper is that it is not clear that direction homogenization could lead to practical improvements for multitask learning the additional experiments in table 2 are useful and suggest that much of the benefit comes from the greater expressivity due to taskspecific matrices ### Summary:
the paper proposes a novel representation of gradnorm gradnorm is presented as a stackelberg game and its theory is used to understand and improve the convergence of gradnorm moreover in addition to the magnitude normalization a direction normalization objective is added to the leader and a rotation matrix and a translation are used for this alignment the paper is reviewed by three knowledgeable reviewers and they unanimously agree on the rejection here are the major issues raised by the reviewers and the area chair the motivation behind the rotation matrix layers is not clear it should be motivated in more detail and explained better with additional illustrations and analyses the empirical study is weak more state of the art approaches from mtl should be included and more realistic datasets should be included the proposed method is not properly explained with respect to existing methods there are mtl methods beyond gradnorm like pcgrad and mgda mtl as moo these methods also fix directions hence it is not clear what the relationship of the proposed method to these ones is i strongly recommend the authors improve their paper by fixing these major issues and submit to the next venue
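As a rough illustration of the gradient-magnitude homogenization idea discussed in the record above, the sketch below sets up a toy two-task regression with a shared encoder: per-task gradient norms on the shared parameters drive a task-weight update (the "leader") while ordinary descent on the weighted objective plays the "follower". The weight-update rule, architecture, and hyperparameters here are invented for illustration; this is not the published GradNorm or RotoGrad algorithm, and it omits the rotation/translation alignment and the leader/follower learning-rate schedule that the Stackelberg analysis concerns.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(64, 10)
y1 = x.sum(dim=1, keepdim=True)           # target for task 1
y2 = (x ** 2).mean(dim=1, keepdim=True)   # target for task 2

shared = nn.Linear(10, 8)                 # shared encoder
heads = nn.ModuleList([nn.Linear(8, 1), nn.Linear(8, 1)])
opt = torch.optim.SGD(list(shared.parameters()) + list(heads.parameters()), lr=1e-2)

weights = torch.ones(2)                   # task weights, adjusted by the "leader"
for step in range(200):
    z = shared(x)
    losses = [nn.functional.mse_loss(heads[0](z), y1),
              nn.functional.mse_loss(heads[1](z), y2)]
    # Per-task gradient norms on the shared parameters only.
    norms = torch.stack([
        torch.cat([g.reshape(-1) for g in torch.autograd.grad(
            l, tuple(shared.parameters()), retain_graph=True)]).norm()
        for l in losses])
    # "Leader" step: nudge weights toward equal weighted gradient magnitudes
    # (a crude stand-in for GradNorm's published update rule).
    weights = weights * (norms.mean() / (norms + 1e-8)) ** 0.1
    weights = 2 * weights / weights.sum()
    # "Follower" step: ordinary gradient descent on the weighted objective.
    opt.zero_grad()
    (weights[0] * losses[0] + weights[1] * losses[1]).backward()
    opt.step()
```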
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies the effort of anisotropic noise in stochastic optimization algorithms the goal is to show that sgd escapes from sharp minima due to such noise the paper provides preliminary empirical results using different kinds of noise to suggest that anisotropic noise is effective for generalization of deep networks detailed comments 1 i have concerns about the novelty of the paper it builds heavily upon previous work on modeling sgd as a stochastic differential equation to understand its noise characteristics the theoretical development of this manuscript is straightforward until simplistic assumptions such as the ornsteinuhlenbeck process which amounts to a local analysis of sgd near a critical point and a neural network with one hidden layer similar results have also been in the the literature before in a number of places eg httpsarxivorgabs170404289 and references therein 2 proposition 4 looks incorrect if the neural network is nonconvex how can the positive semidefinite fisher information matrix f sandwich the hessian which may have strictly negative eigenvalues at places 3 section 5 contains toy experiments on a 2d problem a one layer neural network and a 1000image subset of the fashionmnist dataset it is hard to validate the claims of the paper using these experiments they need to be more thorough the appendix contains highly preliminary experiments on cifar10 using vgg11 4 a rigorous theoretical understanding of sgd with isotropic noise or convergence properties of lagevin dynamics has been developed in the literature previously itd be beneficial to analyze sgd with anisotropic noise in a similar veindocsepthe authors studied the effect of the anisotropic noise of sgd on the algorithms ability to escape from local optima to this end the authors depart from the established approximation of sgd in the vicinity of an optimum as a continuoustime ornsteinuhlenbeck process furthermore the authors argue that in certain deep learning models the anisotropic noise indeed leads to a good escaping from local optima proposition 3 2 seems to assume that the eigenvectors of the noisecovariance of sgd are aligned with the eigenvectors of the hessian did i understand this correctly and is this sufficient maybe this is actually not even necessary since the stationary distribution for the multivariate ornsteinuhlenbeck process can always be calculated gardiner mandt hoffman and blei 20152017 i think this is a decent contribution docsepthe paper studies the benefit of an anisotropic gradient covariance matrix in sgd optimization for training deep network in terms of escaping sharp minima which has been discussed to correlate with poor generalization in recent literature in order to do so sgd is studied as a discrete approximation of stochastic differential equation sde to analyze the benefits of anisotropic nature and remove the confounding effect from scale of noise the scale of noise in the sde is considered fixed during the analysis the authors identify the expected loss around a minimum as the efficient of escaping the minimum and show its relation with the hessian and gradient covariance at the minimum it is then shown that when all the positive eigenvalues of the covariance matrix concentrate along the top eigenvector and this eigenvector is aligned with the top eigenvector of the hessian of the loss wrt the parameters sgd is most efficient at escaping sharp minima these 
characteristics are analytically shown to hold true for a 1 hidden layer network and experiments are conducted on toy and real datasets to verify the theoretical predictions comments i find the main claim of the paper intuitive at any particular minimum if noise in sgd is more aligned with the direction along which loss surface has a large curvature thus the minimum is sharp along this direction sgd will escape this minimum more efficiently on the other hand isotropic noise will be wasteful because a sample from isotropic noise distribution may point along flat directions of the loss even though there may exist other directions along which the loss curvature is large however i have several concerns which i find difficult to point out because many equations are not numbered 1 in proposition 2 it is assumed under the argument of no loss of generality that both the loss at the minimum l00 and the corresponding theta0 0 can the authors clarify how both can be simultaneously true without any loss of generality 2 a number of steps in proposition 2 are missing which makes it difficult to verify when applying itos lemma and taking the integral from 0 to t it is not mentioned that both sides are also multiplied with the inverse of expht 3 in proposition 2 when computing elthetat on page 12 the equalities after line 3 are not clear how they are derived please clarify or update the proof with sufficient details 4 it is mentioned below proposition 2 that the maximum of trh sigma under constraint 6 is achieved when sigma trsigma lambda1 u1u1t where lambda1 is the top eigenvalue of h how is lambda1 a factor in sigma i think sigma should be trsigma u1u1t because this way the sum of eigenvalues of sigma remains unchanged which is what constraint 6 states 5 the proof of proposition 5 is highly unclearwhere did the inequality g0theta2 deltautfu odelta come from also the inequality right below it involves the assumption that utg0 g0u g02 and no justification has been provided behind this assumption regarding experiments the toy experiment in section 51 is interesting but it is not mentioned what network architecture is used in this experiment i found the experiments in section 53 and specifically fig 4 and fig 7 insightful i do have a concern regarding this experiment though in the experiment on fashionmnist in fig 4 it can be seen that both sgd and gld 1st eigvec escapes sharp minimum and this is coherrent with the theory however for the experiment on cifar10 in fig 7 experiment with gld 1st eigvec is missing can the authors show the result for gld 1st eigvec on cifar10 i think it is an important verification of the theory and cifar10 is a more realistic dataset compared with fashionmnist a few minor points 1 in the last paragraph of page 3 it is mentioned that the probability of escaping can be controlled by the expected loss around minimum due to markovs inequality this statement is inaccurate a large expected loss upper bounds the escaping probability it does not control it 2 section 4 is titled the anisotropic noise of sgd in deep networks but the sections analyses a 1 hidden layes network this seems inappropriate 3 in the conclusion section it is mentioned that the theory in the paper unifies various existing optimization mentods please clarify overall i found the argument of the paper somewhat interesting but i am not fully convinced because of the concerns mentioned above ### Summary:
the reviewers point out concerns regarding the papers novelty theoretical soundness and empirical strength the authors provided clarifications to the reviewers
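The record above argues that, for a fixed noise scale, SGD noise aligned with the sharp direction of the Hessian escapes a sharp minimum more efficiently than isotropic noise, with the expected loss around the minimum used as the escaping measure. A minimal numerical sketch of that claim on a 2-D quadratic, with the Hessian, step size, and horizon all made up for illustration (with these toy settings the aligned noise typically yields the larger expected loss):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.diag([10.0, 0.1])      # Hessian at the minimum: one sharp and one flat direction
eta, steps, trials = 0.01, 200, 500
total_var = 1.0               # tr(Sigma) is held fixed so only the anisotropy differs

def expected_loss(Sigma):
    L = np.linalg.cholesky(Sigma + 1e-12 * np.eye(2))
    final = []
    for _ in range(trials):
        theta = np.zeros(2)
        for _ in range(steps):
            theta = theta - eta * (H @ theta) + np.sqrt(eta) * (L @ rng.standard_normal(2))
        final.append(0.5 * theta @ H @ theta)
    return float(np.mean(final))

iso = expected_loss(0.5 * total_var * np.eye(2))     # isotropic noise
aligned = expected_loss(np.diag([total_var, 0.0]))   # noise along the sharp direction
print(f"isotropic: {iso:.3f}   aligned with sharp direction: {aligned:.3f}")
```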
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a deeply supervised fewshot learning model via ensemble achieving stateoftheart performance on miniimagenet and tiredimagenet the authors first studied the classification accuracy on miniimage across convolutional layers and found the network could perform well even in the middle layer therefore they added classification headers on the selected layers so that these layers can directly output predictions the final result is the ensemble of all the select layer predictions called the multiple representation emsemble to improve the result they further average the results of two models with different network backbones called multimodel emsemble the results show this method can achieve stateoftheart performance on the two datasets advantage 1 the motivation and idea in this paper are clear and simple so the reader is easy to understand it 2 figures 2 and 3 are nice which are clearly demonstrate the motivation and algorithm 3 the find in figure 2a is very interesting the middle layer has a better representation than the end on the fewshot image classification task 4 the results are positive disadvantage 1 the idea in the paper is not very novel the main contribution of this paper is doing a deep supervision ensemble however people have studied deep supervision learning for a while on image classification 1 segmentation 2 and depth estimation 3 specifically 2 3 also fuse the multilayers outputs 2 the authors only show the ensemble results via averaging scores over the models it will be good to study more ensemble methods for example the deep layer has higher accuracy than the shallow layer is it possible to assign a different ensemble weight for each layer based on the accuracy 3 in figure 2a why the middle layer performs better than the last layer it will be good to show some analysis 4 in table 1 since the proposed model has done a model ensemble it cannot directly compare with can and ctm should add the result without ensemble in table 1 if i put the thirdrow result 6403 in table 2 to table 1 the improvement would be marginal 5 both miniimagenet and tiredimagenet are the subsets of imagenet to verify the generalization it will be good to add cifar metainat 4 or cub 5 results minor mistakes 1 equation 1 should add the superscript n to r 2 figure 1 the characters are not evenly spaced 3 figure 2 a the axis label is too small 4 in section 41 the sentence the model can be pretrained dtrain or dval is redundant which is common sense 5 in section 51 after pretraining we added shift and scaling parameters for the convolutional layers in the encoder and trained the parameters by the mtl approach used in might add more details about the shift and scale so that the reader does not have to read another paper 6 table 1 the standard deviations in our results are not aligned post rebuttal the authors havent addressed my questions i would keep my score unchanged one more comment i suggest the authors compare to a related baseline simpleshot 6 that is arguably less complicated overall given that the novelty and improvement are minor i think this paper might be not ready at this time 1 lee chenyu et al deeplysupervised nets artificial intelligence and statistics 2015 2 xie saining and zhuowen tu holisticallynested edge detection proceedings of the ieee international conference on computer vision 2015 3 chang jiaren and yongsheng chen pyramid stereo matching network proceedings 
of the ieee conference on computer vision and pattern recognition 2018 4 wertheimer davis and bharath hariharan fewshot learning with localization in realistic settings proceedings of the ieee conference on computer vision and pattern recognition 2019 5 wah catherine et al the caltechucsd birds2002011 dataset 2011 6 wang yan et al simpleshot revisiting nearestneighbor classification for fewshot learning arxiv preprint arxiv191104623 2019 docsepsummary the authors propose to tackle the problem of fewshot learning fsl using ensembling diverse classifiers the diverse classifiers are obtained using the outputs from different intermediate layers of a pretrained cnn feature extractor or multiple cnns as a result the authors demonstrate stateoftheart accuracy on two miniimagenet and tieredimagenet datasets pros the idea totally makes sense since in fewshot learning the test distribution may be quite different from the training one hence employing lowerlayer features that are more classinvariant must be helpful even though the space of semantic concepts learned by earlier layers is probably not as reach as for the deeper layers the results in miniimagenet and tieredimagenet are impressive the experimental section is informative and clear the paper is well written and is easy to follow cons limited contribution none of the introduced ideas in this paper is novel for example the idea of using ensemble methods for fsl was introduced in 1 then the idea of aggregating information from intermediate layers of a feature extractor to build a reacher classifier for fsl was introduced in 2 the authors of 3 also used intermediate layers for better classification results basically the contribution of the current work is to combine the ideas of 1 and 2 while using a different backbone network a new resnet18 and a different classifier relationnet i would call the need to manually select the layers from which to build classifiers a downside of the approach since selecting all representation would lead to degraded performance overall i like how the paper reads however the contribution of this work boils down to combining existing ideas and methods into a new pipeline which i dont find sufficient for the iclr acceptance standard 1 dvornik etal diversity with cooperation ensemble methods for fewshot classification 2 dvornik etal selecting relevant features from a multidomain representation for fewshot classification 3 rusu etal metalearning with latent embedding optimizationdocsepthanks to the authors for providing such an ensemble approach this paper aims to find a way to directly utilize representations with the classification layers to obtain better performance the ensemble method is able to create an ensemble of classifiers and the ensemble achieves the new stateoftheart results in a fewshot setting comparing to previous regular and ensemble approaches this topic is very straightforward and would be very easy for the audience to understand while the results might not be that convincing enough the biggest concern is the contribution of this paper to be more specific the proposed method might not be useful and might need to be tuned in other fewshot settings the miniimagenet and tieredimagenet results are good while the authors could provide more evidence to show its strength and how to balance the computation and model performance for the experimental setup it is good and reproducible however when digging deeper the reason for the ensemble is that we want to find a way to calculate the features through different 
classifiers maybe this is because a single classifier is not able to learn all the features from the images at once but why it is necessary to use this approach it also needs pretraining instead of using a more powerful network to achieve a similar performance it is really good to see those analyses on single encoder multiple representation multiple encoder multiple representation and selection of encoders and representations for the ensemble it would be suggested if the author could give a detailed interpretation of the selected layer and how it could be used in other settings this paper is wellwritten with not many typos the topic is inspiring and interesting while it is not clear how the ensemble could help fsl tasks the improvement is not obvious and the results are not enough also it would be better the authors could provide more analysis about why this ensemble works it would be better the authors could give an analysis of the hyperparameters of the proposed method for example in 53 selection of encoders and representations for the ensemble 093 but how the model performs when is different and how we could find an appropriate when doing ensemble the authors should provide enough support to justify the validity of the methods and why this method is worth doing in comparison with other methods also it is worth discussing other aspects such as flops params etc docsepthe authors propose a simple approach which obtains competitive results with the state of the art of few shot learning however i have the following concerns the proposed method is somewhat incremental the authors propose to average the predictions of classifiers that take as input different features from the backbone while its a sensible thing to try in my understanding the proposed method is equivalent to a simpler approach that would simply concatenate those features and learn a classifier on the concatenated features i believe that this approach and a number of other simple baselines employing a wider representation space extracted from the backbone would be important to strengthen the analysis of the proposed method the presentation of state of the art results is incomplete dvornik et al also report results for tieredimagenet which surpass the reported results some other relevant works would need to be cited and compared to 1 2 some of their results also surpass the reported results organisation method and results should be presented separately the current flow of the paper alternates between empirical findings motivation a formal approach methodology and experimental results this structure suggests that the submission would likely be better suited for a more technical venue it would also better to isolate in a background section the presentation of the baseline approach sung et al 2018 before presenting the proposed method itself to make it more evident what the contributions are the authors do not motivate the chosen experimental setting fsl there is no analysis of why the proposed approach should be particularly well suited to address the specificities and challenges of this task it seems to me that employing so many linear classifiers on increasingly larger dimensional features would lead to a large increase in parameter count but the authors do not perform any analysis regarding this aspect overall a lot of polishing of the paper is needed prior to publication please find a few comments in that respect below comments figure 2 what is resnet18 in red if its not v1 or v2 on this note both papers should be cited when 
they are introduced in section 3 only resnet v1 is cited our ensemble contains multiple encoders encoders of different network structures at this point this is not very clear is the method used on top of a traditional ensemble multimodel multirepresentation ensemble sounds tautological abolition should probably be ablation 1 fewshot learning via embedding adaptation with settoset function ye et al cvpr 2020 2 adaptive subspaces for fewshot learning simon et al cvpr 2020 ### Summary:
this paper introduces an ensemble method for fewshot learning although the introduced method yields competitive results it is fair to say it is more complicated than much simpler algorithms and does not necessarily perform better than them given that ensembling for fewshot learning has been around for a while it is not clear that this paper will have a significant audience at iclr sorry about the bad news ac
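The record above describes attaching classification heads to intermediate layers of a pretrained backbone and averaging their predictions. A minimal sketch of that multi-layer prediction ensembling, assuming a toy convolutional backbone with linear heads rather than the relation-network classifier used in the paper; the layer selection, feature sizes, and 5-way setting are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
])
num_classes = 5
heads = nn.ModuleList([nn.Linear(c, num_classes) for c in (16, 32, 64)])

def ensemble_predict(x, selected=(1, 2)):
    """Average the softmax outputs of classifiers attached to the selected layers."""
    feats, h = [], x
    for blk in blocks:
        h = blk(h)
        feats.append(h.mean(dim=(2, 3)))   # global average pooling per layer
    probs = [torch.softmax(heads[i](feats[i]), dim=-1) for i in selected]
    return torch.stack(probs).mean(dim=0)

queries = torch.randn(4, 3, 32, 32)        # a toy batch of query images
print(ensemble_predict(queries).shape)     # torch.Size([4, 5])
```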
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: seems like the most direct way to estimate mutual information using a classifier i like this work because it is much more straightforward than the prior work such as mine it shows sufficient performance on the experiments showndocsepthis work suggests a new discriminative mutual information estimator that relies on a classifier to directly estimate the log density ratio of pxypxpy without variational lower bound in general the idea is easy to follow and simple simulations are done to demonstrate its effectiveness however i still have some concerns 1 a classifier based mi estimator reminds me of a closely related problems the independence test for the latter there are also a few recent proposals based on a classifier to distinguish pxy from pxpy i understand the methodologies are different but i still feel some motivations are similar it would be better if authors can clarify this point 1 lopezpaz david and maxime oquab revisiting classifier twosample tests iclr 2017 2 sen rajat ananda theertha suresh karthikeyan shanmugam alexandros g dimakis and sanjay shakkottai modelpowered conditional independence test neurips 2017 2 authors discussed the theoretical optimum of their estimator when the number of samples approching to infinity in the simulations it seems that the number of training samples is also very large eg 160k what will happen in case of moderate or small number of samples 3 for me the simulation on the selfconsistency tests does not demonstrate a big advantage of demi especially considering that a few competitors are not included eg gm mentioned in song and ermon 2019 on the other hand lots of work on variational mi including this one claim the great potential on representation learning with either mutual information maximization or information bottleneck however validations are totally missing in this sense it would be much better if authors can provide a simple representation learning demo just like hjelm et al 2018 what will happen if we replce mine with demi 4 it seems from fig 1 the advantage of the estimator becomes more obvious with the increase of dimension can authors provide some explanation or theoretical analysis 5 it seems to me the work is prepared in a quick time there are a few typos eg the 6 line in the second paragraph of page 3 hatpxyhatd should be hatpyhatd the clarity and location of figures can be improved docsep summary this paper proposes demi a discriminative approach to estimate mutual information mi the main idea is that instead of learning generative distributions of joint and marginals learning a single likelihood ratio that is discriminative and hence more tractable a posterior pz x y trying to distinguish between the joint distribution px y and the product distribution pxpy once the posterior is learned it can be used to estimate the mi strength this paper studies a very important problem for the representation learning community mutual information has been a very powerful principled technique for deep representation learning and many applications but there are many challenges in scalable and accurate lowvariance estimation therefore developing an accurate mi estimator is of high importance and significance i find the idea of lifting the distribution and converting the mi estimation problem into a discriminative setting interesting and looks novel the method makes sense and the training procedure is very simple achieving better 
estimation than the baselines this paper is wellplaced and contains comprehensive discussion of recent works about the limitations and research challenges on mutual information estimation the mathematical connection to existing methods mine infonce smile etc provide an interesting insight weakness the method only discusses estimation of mutual information not maximization of mi for representation learning the biggest weakness of this paper would be experiments the training data used in the experiments is either lowdimensional or synthetic so there remains a question about how well this method will scale to a highdimensional and challenging deep learning setting as in song eromn 2019 empirical analysis on the cifar10 dataset would be needed if provided my rating would increase bias and tradeoff analysis similar to song ermon 2019 is missing question the hyperparameter alpha is said to be set to 10 section 3 which does not seem feasible based on the equation 5 7 was it meant to be 05 can the authors clarify on this also i am curious how sensitive demi is on the choice of the prior hyperparameter alpha this would be a good analysis to have for the completeness of the paper additional comments section organization i suggest having an introduction as a separate section with methods being the following section section 3 experiments and 4 results can be combined for section 2 related work a different name could be considered because the main content here additionally includes a theoretical connection to existing approaches which is in fact an important contribution of the paper the plots in figure 2 are not properly scaled with too many lines overlapping one another i suggest the authors improve the plot for better readability typo in section 42 overalll please place a whitespace after the column demi in the titledocsepthis paper proposed a discriminative estimator for mutual information to alleviate the shortcomings of the existing estimators such as mine and smile a classifier was built to decide whether the sample is drawn from the joint distribution or the independent one product of marginals theoretical justification and experimental results were provided to support the proposed estimator the paper was written with clarity and easy to follow here are some detailed comments on the technical contribution of this paper 1 there is a closely related piece of work in the literature see below they also proposed a discriminative estimator for kl divergence with mutual information as a special case it would be nice if the authors could relate to this existing work and provide experimental comparison to their estimator mukherjee sudipto himanshu asnani and sreeram kannan ccmi classifier based conditional mutual information estimation in uncertainty in artificial intelligence pp 10831093 pmlr 2020 httpsarxivorgpdf190601824pdf 2 from figure 1 right column we see that all estimators including the one proposed underestimate the mutual information when it is high could the authors give more analysis and explanation on this phenomenon 3 it would be nice if the authors could provide experimental results on more realistic datasets and show the advantage of the proposed estimator when it is used for other downstream tasks often estimating the mutual information is not the end goal but an intermediate step to achieve other goals see the mine paper for examples 4 a minor point in equation 10 the last part it should be 1z log 1 q instead of 1z 1 logq ### Summary:
in the paper the authors propose a new method for estimating the mutual information based on a neural network classifier that is fairly straightforward the proposed method compares relatively well with known methods for estimating mutual information when a very large number of samples is available the main issue is that the method requires a classifier a neural network that discriminates between x y pairs coming from pxy and x y pairs coming from pxpy this is done via reshuffling the reviewers point out that the procedure is interesting but it does not perform significantly better than the other proposed methods also i want to add that the proposed method is trained using a given nn trained for 20 epochs and a minibatch of 64 this is a significant issue because if we train the nn to reduce the validation error the posterior probability estimates are typically overconfident and significant work is being done to calibrate them why 20 how do we select this number if we cannot use a validation set with fewer training examples does 20 also work this is very relevant because in the areas in which pxypxpy is low for very high mi values getting these estimates correct is critical the classifier does not need to perform accurately in classification but it must provide an accurate estimate of the posterior probability and nns will tend to be overconfident here and provide a biased estimate for these values it will also provide an overestimated probability in the areas where both pxy and pxpy are high finally the authors reference the paper by nguyen wainwright and jordan but they do not acknowledge how that paper actually estimates logpxypxpy similarly that paper is very general and theoretical and this paper can only be understood as a particular implementation of their solution i think the authors missed that point in their paper also i think the authors should acknowledge the papers that have come before using nearest neighbor or histograms for entropy estimation
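The record above refers to the generic density-ratio trick: a classifier trained to distinguish joint pairs (x, y) ~ p(x, y) from shuffled pairs drawn from p(x)p(y) produces a logit that, with balanced classes, estimates log p(x, y) / (p(x)p(y)), and its average over joint samples estimates the mutual information. A minimal sketch on a correlated Gaussian where the true MI is known in closed form; this uses plain logistic regression with hand-picked quadratic features, not the paper's network, prior hyperparameter alpha, or training protocol:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
true_mi = -0.5 * np.log(1 - rho ** 2)          # MI of a correlated bivariate Gaussian

x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
joint = np.column_stack([x, y])
product = np.column_stack([x, rng.permutation(y)])   # shuffling y breaks the dependence

def featurize(pairs):                          # quadratic features suffice for Gaussians
    return np.column_stack([pairs, pairs[:, 0] * pairs[:, 1], pairs ** 2])

X = featurize(np.vstack([joint, product]))
labels = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# With balanced classes the logit estimates log p(x,y)/(p(x)p(y)); its mean over
# joint samples is the MI estimate.
log_ratio = clf.decision_function(featurize(joint))
print(f"true MI: {true_mi:.3f}   classifier estimate: {log_ratio.mean():.3f}")
```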
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this paper proposes to use categorical grammars cg to model learned protocols in emergent communication inspired by work on ccgs for natural language they use cgi to learn a lambda calculus that can model the emergent language from there they propose to use two metrics of the learned cg as metrics of emergent language compositionality f1score of the grammar on a heldout test set cgf and size of the cg lexicon cgl the idea is that if the cg better captures the learned protocol as shown on the test set then it will likely be a compositional protocol cgf and a protocol that decomposes into fewer lexical items will be more compositional cgl to measure the quality of their metrics they use lstms to learn to reconstruct two types of input spaces langattval which is composed of actiondirectionnumber eg lookright2 and langconj which is can combine two langattval statements with an add between them the authors compare the learned languages to likely less compositional languages adjswap12 by swapping 1 or 2 tokens in the learned protocols they find that on langattval the metrics do not distinguish between the less compsitional protocols in contrast on langconj the metrics clearly show the unswapped language to be more compositional furthermore the metrics correlate with topsim providing another argument for their use review overall i believe the paper is interesting and very novel to my knowledge no one has attempted using cgs to model emergent language indeed most of the current metrics for compositionality in ec do not measure nontrivial compositionality so it is good to see more people investigate the complex ways meaning may be transmitted furthermore the paper is well written and provides a great intro to cgs and overview of cgi in the appendix i also appreciated detailing the experimental hyperparameters and showing std deviations in the graphs i believe this paper will make for excellent discussion so i recommend it to be accepted the following comments are mainly for the authors so that they may improve their work for future submission and perhaps give them ideas for the discussions theyd like to have i think the major challenge with this work is that measuring the efficacy of a compositionality metric is difficult because it requires having protocols that are less or more compositional in tre andreas shows a relationship between his metric and mutual information human subjective opinion topsim and systematic generalization in a work from last years workshop measuring nontrivial compositionality in emergent communication korbak et al create specific languages with common pitfalls and then demonstrate how different metrics catch different pitfalls this paper learns a language and then uses adjswap to construct languages that are likely less compositional the issue is that those languages may be worse in many other ways as well so it isnt clear that compositionality is the exact thing your metric is measuring instead i would suggest following andreas and learning languages on a dataset and seeing which ones generalize systematically to a test set it is likely that just changing the random seed will lead to vastly different generalization outcomes and comparing to systematic generalization would be a stronger argument than the adjswap heuristic alternatively you could also specifically learncreate a protocol where tre could not easily capture the compositionality and 
demonstrate your metric works better the other big challenge for this work is the specific metrics themselves the idea behind f1score and lexicon size is reasonable but they require that the learned cg is a good representation of the protocol it becomes an issue that perhaps the reason behind a large lexicon or bad f1score isnt the compositionality of the emergent protocol but the quality of the learned cg you could demonstrate the cg accuracy correlates with the ec game accuracy which could help overall it is a difficult thing to show because although we know natural language can be generated by something like a lambda calculus it isnt clear that this is the sort of thing that lstms are outputting and so it isnt clear that cgi is accurately capturing the meaning a qualitative analysis of the learned lexicon something like interpretability would be a big step towards showing this i would also like to point out the idea of using a cg for emergent language is quite clever and there are many other possible research ideas stemming from this for example you could learn a cg and use it to replace the senders message then retrain the receiver and see if the resulting protocol is even better another idea is to use the cg as a loss function to guide learning a more compositional protocol as mentioned in future work situated ccgs would be incredibly interesting to see in a gridworld minor comments for lstms with attention please also cite bahdanau et al 2014 comparing to tre feels like a stronger baseline than topsim although ideally you have bothdocsep summary the paper propose to exmine the ccg grammar induced from the emergent language as a probe to measure the underlying linguistic structure like compositionality the authors conduct experiments on the classifical signalling game with a seq2seq lstm model they show that the proposed metric has some correlation with existing metric like toposim and offer extra benefits strengths the introduction of an automatic grammar induction algorithm is a novel idea to me besides the grammar tree depths i imagine there could be other interesting metrics around the induced grammar tree weakness i would love to see more experiments on the proposed metric to start with is it sensitive to the grammar induction optimization process one reason people use toposim is that its simple and stable to compute secondly there are some known algorithm that can improve toposim in the classical signalling game eg neural iterated learning 1 so it should be more interesting to plot this metric along side the toposim with iterated learning final overally i enjoy reading this paper and i like the idea while i think the experiment can be made better with the analysis suggested above 1 httpsopenreviewnetforumidhkepnpvkpb ### Summary:
this paper takes on the difficult task of proposing a new metric for compositionality both reviewers found the idea of categorical grammar novel and interesting and would like to see it pursued further we accept this paper and look forward to discussions on this topic and future work
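The record above repeatedly compares the proposed grammar-based metrics against topographic similarity (topsim). One common formulation of topsim, sketched below, is the Spearman correlation between pairwise distances in meaning space and in message space; the choice of Hamming distance and the toy attribute-value protocols are assumptions for illustration, not the setup of the reviewed paper:

```python
import itertools
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import hamming

def topographic_similarity(meanings, messages):
    """Spearman correlation between pairwise meaning distances and message distances."""
    pairs = list(itertools.combinations(range(len(meanings)), 2))
    d_meaning = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [hamming(messages[i], messages[j]) for i, j in pairs]
    corr, _ = spearmanr(d_meaning, d_message)
    return corr

meanings = [(a, b) for a in range(3) for b in range(3)]   # two attributes, three values each
compositional = list(meanings)                            # symbol i literally encodes attribute i
rng = np.random.default_rng(0)
holistic = [tuple(rng.integers(0, 3, size=2)) for _ in meanings]  # arbitrary code per meaning

print(topographic_similarity(meanings, compositional))    # 1.0 for the compositional protocol
print(topographic_similarity(meanings, holistic))         # typically much lower
```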
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper describes a method for making user data unusable for training machine learning models it focuses primarily on image data the basic idea is to use errorminimizing noise in this paper the author propose adding imperceptible to users errorminimizing noise that would make training data unusable for training the authors proposed 2 methods for generating the noise samplewise and classwise this paper is well written the code and the datasets used for the experimentation have been provided overall i would recommend accepting this papers my only concern is with the effectiveness of the proposed technique given what authors discussed in the appendix see questions below the method was used on standard openly available image datasets the results showed that when close to 100 of training samples have been updated with the errorminimizing noise the model performance went down considerably as desired however when even 20 of training data was left clean model performance remained good questions from the appendix notes it appears that adversarial training can significantly negate the effect of adding errorminimizing noise the resulted model performance would be degraded when compared to model training using clean training data only but considering the fact that authors themselves acknowledged that user data with errorminimizing noise may just constitute a fraction of all the training data available for training a model the effectiveness of this technique may be limited due to the effectiveness of adversarial training and the outsized influence of a relatively small number of clean data samples on model performance can the authors discuss the issues with the effectiveness of their presented technique mostly cosmetic page 5 section 41 title errormaximizing written twicedocsepsummary the authors studied the problem of data protection from a new perspective they proposed one kind of errorminimizing noise to make the data added noise unlearnable the noise is imperceptible to human eyes and thus does not affect normal data utility the idea is very interesting and inspiring the authors conducted a series of solid experiments to validate the effectiveness of the proposed noise and tested it on a real world task of face recognition pros 1 the idea of the paper is very interesting its motivation is intuitive and well explained considering adversarial training is to find the worst case example to make the training process robust the authors proposed an opposite direction to find the easiest case to make the training process to learn nothing the authors also proposed two types of noise classwise and samplewise which is a complete formulation 2 the paper revealed an important problem to protect privacy and proposed a simple yet effective method to prevent our data from unauthorized exploitation for training commercial models i think it will attract a broad audience in the iclr community 3 the experiments are solid and comprehensive considering the difference to random and error maximizing noises effectiveness on different datasets and model architectures the detailed stability and transferability analysis convince me why and how error minimizing noise works besides they also show a realworld face recognition task to demonstrate its usefulness in practice cons 1 what is the overhead of generating and adding this kind of noise the author did not mention it in the paper 2 revisiting figure 1 i am 
curious to know why the samplewise and classwise noises perform so differently especially for random and errormaximizing noise 3 what is the difference between the proposed noise and the data poisoning methodsdocsepsummary the authors proposed the idea of using invisible noise to make personal data unusable to authorized deep learning models to achieve this goal the authors proposed the idea of errorminimizing noise crafted by a minmin optimization method the errorminimizing noise is then added to training examples to make them unlearnable to deep learning models the idea is very well motivated and explained the experiments not only confirm the exceptional effectiveness of the proposed method but also show its flexibility pros 1 the paper is very well written and easy to read 2 i find the idea is very attractive and could have a significant social impact especially considering the fact that personal data has already been overused without consent to train not just commercial but also illegitimate models to fake information or track peoples identity 3 the idea of using the errorminimizing noise is well explained and the generation method is well formulated 4 the experiments are very thorough providing not only evidence of the superb effectiveness of the proposed noise over random or adversarial noise but also the flexibilities and limitations of the proposed method the realworld experiment makes the proposed idea even more convincing although it is just a simple simulated scenario 5 it seems that classwise noise can easily break a classification model which is somewhat interesting from the data protection perspective cons 1 i think the classwise noise breaks the iid assumption of machine learning it seems that breaking the essential assumptions in machine learning can break the model although this is not new however it turns out to be very interesting if used for data protection or similar ideas the authors could have more discussions on this point for example what would happen if someone always used a different background may be invisible for each of the photos uploaded to social media always shifting the newly collected test data to a different distribution can this serve as the same purpose 2 the proposed noise seems not strong against adversarial training although adversarial training is costly and decreasing performance at this moment they may be improved in the future a discussion on the possible ways to generate the noise against adversarial training can be useful 3 how the proposed method is related to backdoor attacks it acts as a type of backdoor attack yes backdoor attacks do not decrease the models performance on clean data i think the clean data in the proposed setting should be the poisoned data rather than the ground truth clean data since both the training and testing data will be collected at the same time i guess the only difference is that in this protection setting the defender cannot do anything about it unless recollecting or denoising the data even if the defender finds the model is poisoned i suggest the authors include more discussions around this pointdocsep textbfcomments the papers motivation is based on protecting private data and preventing its being scraped and used to train models even though motivation is clear and very important the problem is the same as the works in crafting adversarial samples ie the ones under data poisoning and adversarial attacks parts of the related work the key difference is to apply projected gradient descent mandry et al 2018 in 
the reverse direction iteratively to minimize the loss function furthermore the performance evaluation will be the margin between models trained on completely clean data and samplewiseclasswise adversarially corrupted data in contrast to fooling a pretrained network in adversarial attack benchmarks bullet percentage of noisy training data in the assumptions on defenders capability paragraph the assumption is that only a part of the training set could be perturbed the margin between error maximization and minimization on cifar10 is remarkable figure 1 and this figure is misleading 100 of the training data was perturbed besides table 2 gives accuracy in different ratios of noisy training samples to understand whether perturbed training samples contribute to learning or not i would compare them with clean training for instance in addition to the results of 20 perturbed training setting in deltas and deltac training with only 80 of clean data without perturbed samples bullet comparison to pgd mandry et al 2018 even under the classwise perturbation the noisy training data is learnable in a samplewise setting errormaximizing noise is still learnable and performs very well however it performs around 20 and similar to errorminimization in a classwise setting figure 1 if i am not wrong projected gradient descent as proposed and applied in mandry et al 2018 figure 1 right side reduces the performance the same as the proposed error minimization approach and there is no performance gain bullet generalization to different adversarial attack methods error minimization is shown using pdg only there are several adversarial attack benchmarks on cifar10 and imagenet such as cleverhans foolbox or realsafe considering different evaluation protocols adopting these benchmarks for evaluation is a reasonable option to eliminate other factors is error minimization limited to only pdg or other methods did you try the effect of error minimization using any other method httpsarxivorgabs170704131 httpsarxivorgabs161000768 httpsarxivorgabs191211852 bullet different sourcetarget models in all experiments the source model is resnet18 the classification models used in performance evaluation are resnet18 resnet50 and densenet121 all three models are based on residual blocks in practice we cannot assume the architecture that will be used by thirdparties did you try completely different target models such as alexnet vgg inceptionv3 etc bullet application to face analysis face recognition experiment is nonstandard i strongly recommend applying a standard dataset evaluation that would make comparisons possible both source and target sets are the datasets subsets and the selected identities might show visual dissimilarities ie ethnicity age gender you can report the full performance on the entire target dataset webface furthermore face recognition models are trained as a recognition problem with classification losses or metric learning but tested in face verification settings calculating the distance to query samples reporting the distribution of these distances for instance cumulative matching characteristic cmc and receiver operating characteristic roc would be more informative ### Summary:
the paper proposed a novel methodology for protecting personal data from unauthorized exploitation for training commercial models the proposal is conceptually intuitive and technically motivated it goes in the opposite direction of adversarial training by adding certain errorminimizing noise rather than errormaximizing noise to the data the model is fooled and believes there is nothing to learn from the data and thus this can protect the data from being used for training the paper is not only of high quality but also of broad interest given the current social concerns about personal data privacy i think its potential impact should get it a spotlight presentation
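The record above describes error-minimizing noise as the opposite of adversarial-example generation: an inner loop that minimizes rather than maximizes the training loss within an imperceptible L_inf budget. A minimal sketch of that inner step only, assuming a toy linear model and standard PGD-style updates; the full min-min method alternates this with training the source model, which is omitted here, and the architecture and budget are illustrative guesses:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # stand-in source model
images = torch.rand(16, 3, 32, 32)                                # toy batch of user images
targets = torch.randint(0, 10, (16,))
eps, alpha, steps = 8 / 255, 2 / 255, 10                          # small L_inf budget

def error_minimizing_noise(model, x, y):
    """PGD-style inner loop that *descends* the loss within an L_inf ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()              # descend, unlike error-maximizing PGD
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep perturbed pixels in [0, 1]
    return delta.detach()

delta = error_minimizing_noise(model, images, targets)
print(delta.abs().max())   # stays within the eps budget
```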
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: docsepthe authors introduce cell2state an algorithm that incorporates both genetic barcoding coupled with singlecell sequenced data to model explicit state transitions of cell dynamics over time singlecell gene expression profiles are mapped to lowdimensional state vectors that are predictive of cell dynamics cell2state is evaluated using barcoded stem cell dataset biddy et al 2018 and simulation studies are also presented the model demonstrates better results for cell state prediction finding dynamically stable clusters and reveals potential latent metastates of the underlying cellular evolution process strength the paper deals with a very relevant and challenging problem in biology that of lineage tracing along with states weakness paper is very hard to read there is no consistency to notation and variables used i worry that the main claims might be incorrect but tidying up the notation might help alleviate some of these concerns page 2 1st paragraph shouldnt the lossless encoding of states be iphixphix instead of iphixx figure 1 has variables x and lambda what are these also in this figure label an example of xt and xt1 what is definition 1 stating what is the growth rate and what is y shouldnt expnxt be expall n descendants of xtxt definition 2 states p but uses f in the equation the p in definition 1 is not the same as the p in assumption 1 please maintain consistency of variables used can 1 cell map to more than one latent meta states last line of page 3 phixt is the lowdim embedding of xt and not xt1 check notational consistency section 31 what is pi section 31 what is the lifting of dimensions the data is reduced using pca then lifted to highdim using another gaussian kernel would this not add too many and irrelevant dimensions to an already noisy dataset section 31 how do you define the function space h section 32 what is x section 32 what is phat it is worth dedicating more explanation to the cell2state algorithm the tetrahedron structures what these mean before section 33 section 5 could go to the supplementary to make space for relevant material in the main paper in page 2 summary 3rd point the authors claim their model would perform with 7 dimensions i could not find details to this in the experiments section docsepthe authors develop a novel approach to learn a low dimensional embedding of transcriptomic state of cells using data from cell barcoding experiments which can capture the single cell rnaseq profiles of cells and their descendants the main contributions of the paper can be summarized as 1 a novel approach to learn latent representation of the transcriptomic state of cells by utilizing knowledge of true cell lineage 2 a mathematical analysis of the distortion of the learned embedding under certain reasonable assumptions 3 experiments on simulated and real data to validate the proposed approach the strengths of the paper according to me are the following 1 the authors propose a novel way to identify the transcriptomic state of cells 2 the proposed approach is based on sound mathematical intuitions 3 contingent on certain assumptions holding more on this in the weakness the approach can be theoretically shown to work reasonably well 4 the experimental validation is quite reasonable the weaknesses of the paper according to me are the following 1 it is entirely unclear to me who the target audience for the paper is it reads to a degree like a paper i would 
find in a life sciences journal but it also contains aspects of a traditional machine learning paper this leads to a paper that in my humble opinion would be truly appreciated by a very small number of computational biology researchers who are also sufficiently proficient in machine learning however this is largely a problem with the writing of the paper i would suggest the authors either introduce more biological contexts in the paper if their goal is to introduce the machine learning audience to an interdisciplinary problem or focus on the methodological aspects of the paper and introduce the biological details as an application 2 assumption 1 is central to the mathematical formulation used in the paper and is a pretty reasonable assumption in my view however it is quite unacceptable that the reasonableness of the assumption is not discussed in any depth given how central it is to the paper i would suggest citing some papers that explore the assumption in more detail 3 the simulated experiment need to be better motivated currently it reads like the data generation process simply meets the assumption of the cell2state algorithm and hence it works well the authors should at least explain why this is a reasonable approach to generated synthetic data and ideally cite other papers that have generated synthetic data using a similar approach the paper presents an approach to infer cell state from a relatively new type of biological dataset the work is both important in its scope and novel in terms of approach however the paper has some big writing issues in my opinion a lot of the biological context is not appropriately introduced for a non life sciences audience and some justifications are missinglacking docsepkernelbased embedding of barcoded singlecell data that preserves mutual information between barcoded pairs this manuscript proposes a kernelbased embedding technique to map highdimensional singlecell gene expression feature vectors to a lowdimensional space so that information is preserved in pairs of singlecell expression pairs that are measured from barcoded parentdescendant pairs the proposed method uses barcoded singlecell data even though each cell can be measured only once the method builds on the assumption that dividing cells are phenotypically approximately similar and descendants of the same cell lineage reveals dynamics of cell transitions overall the proposed method aims to embed the singlecell pairs so that information between barcoded cell pairs is preserved the kernelbased feature embedding is implemented with a combination of random fourier features and with a more more traditional pcasvd dimension reduction techniques additionally authors provide bounds for distance distortion of state embeddings as well as information loss of the embedding the details of the proofs i did not check in details the main drawbacks of this manuscript include i a nonspecific description of the methods in the main texts which leaves a number of technical aspects partly unclear and ii description of somewhat unconventional analysis results without any comparisons to previous methods it is difficult to grasp the true novelty and practical benefits of the proposed method based on the presented results without any comparisons docsepauthors proposed cell2state that could embed barcoded scrnaseq trajectories into lowdimensional representation authors provided theoretic analysis of the embedding learnt by cell2state and demonstrated that the learnt embedding was almost lossless authors applied this 
embedding framework on one barcoded scrnaseq dataset biddy et al 2018 and demonstrated the learnt embeddings clearly distinguished different cell states furthermore the learnt embeddings were able to substantially improve various downstream tasks other than identifying cell subpopulation major comments 1 fig 3ik why does the error increase dramatically when gamma kernel width increases over 102 2 based on fig 3d and 3e it is clear that day21 raw data is more helpful to identify cell subpopulation compared with day12 raw data im curious to see the difference between day21 cell2state and day21 raw data embedding also it is not fair to compare cell2state embedding and raw data embedding from single time point day12 or day21 as cell2state utilize raw data from both time point to generate the representation instead authors can first concatenate day12 and day21 data along the dimension of cell just concatenate the expression profile of cell at day12 and the expression profile of corresponding descendant together there will be missing profiles as cells were not perfectly paired and authors can impute the missing profiles based on nearest neighbors with respect to day12 then authors can perform pcakernel pcaumap to generate embedding for day12 based on this joint gene expression profile this serves a simple baseline model as it utilizes the same input as that to cell2state authors can compare day12 cell2state to this joint embedding to see if cell2state generates better embeddings given the same input 3 what does the color scale stand for in fig4b for a better visualization this panel can be rotated 90 degrees as well 4 2000 random fourier features were selected based on algorithm 1 im wondering if this number of random features is sufficient to approximate the kernel is 2000 an optimized number after tuning in the parameter space minor comments 1 the color scale in fig 3a is hard to distinguish the gradient is helpful to illustrate the nature of the data but it is more helpful to distinguish them here in panel a 2 it is hard to make sense of fig 3b it is more clear to just illustrate the general trend of cell transition instead of drawing all individual connections here 3 it is better to explain the 366g abbreviation in the caption of fig 3 or in the corresponding section section 422 although authors demonstrated that cell2state embedding is lossless and helpful to achieve a superior performance combined with a simple linear classification model authors didnt directly compare cell2state with any other compatible embedding methods this makes it hard to evaluate the novelty of the proposed embedding framework authors mentioned state representation learning and other related fields in the background section i would suggest authors to pick up similar embedding methods as a baseline model for comparison ### Summary:
While the problem tackled in this paper is interesting, there is a consensus among the reviewers that the writing does not allow the reader to fully understand the method, the biological context, or the results obtained. We encourage the authors to take the reviewers' comments into account when preparing a future, improved version of the manuscript.
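For reference, the random-Fourier-feature construction that the cell2state reviews refer to (PCA-reduced profiles lifted with a Gaussian kernel, approximated by 2000 random features) can be sketched generically as follows. This is not the authors' implementation; the feature count, kernel width, and toy data are placeholders echoing the reviews.

```python
import numpy as np

def random_fourier_features(X, n_features=2000, gamma=1.0, seed=0):
    """Approximate a Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit feature map z, so that z(x) @ z(y) ~= k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies are drawn from the Fourier transform of the RBF kernel: N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy usage: embed PCA-reduced expression profiles (random stand-in data here).
X = np.random.randn(100, 50)           # 100 cells, 50 PCA dimensions
Z = random_fourier_features(X, n_features=2000, gamma=0.01)
approx_gram = Z @ Z.T                   # approximates the 100 x 100 RBF Gram matrix
```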
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: based on a dynamic system perspective this paper characterizes the convergence of gradient penalized wasserstein gan the analytic framework is similar to the one used in nagarajan kolter but requires very heavy machinery to handle measure valued differentiation overall the math seems solid but i have a few questions about the motivation and assumption 1 to my limited knowledge it seems that the twotimescale framework 1 handles both batch and stochastic settings well also from a dynamic system perspective i am wondering why not follow their path since under their framework adding a gradient penalty does not introduce all the technical difficulty in this paper 2 the main theorem characterizes the stability or convergence but does not characterize the advantage of gradient penalty does it make the system more stable at least more technical discussion around the theorem is needed 3 besides the technicality of handling the support of the measure what is new beyond the analysis of nagarajan kolter 1 gans trained by a two timescale update rule converge to a local nash equilibrium by martin heusel hubert ramsauer thomas unterthiner bernhard nessler sepp hochreiter i may be missing something and would like to see the authors response after rebuttal i have carefully read the authors response i appreciate the explanation after reading 1 in detail my conclusion is still that 1 seems to be a stronger framework than the current one and easily extends to the setting with gradient penalty compared with nagarajan and kolter the contribution of this paper seems to be minor although technically involved i have checked the updated pdf but havent found the authors rigorous more stable argumentdocsepthis paper shows that an ideal equilibrium point of a sgpwgan is stable it makes several assumptions that while clear why they are needed in the proof is unjustified in practice the authors should elaborate on these assumptions and comment on why they are reasonable assumptions 1 and 3 essentially say that there is a tube both in sample space and in parameter space around the true data generating distribution in which the discriminator cannot distinguish this seems a strong restriction to the effect of the discriminator is weak for example assumption 1 says if given a sample slightly off the data manifold it still cannot distinguish at all a more reasonable assumption is the ability of the discriminator decays gracefully as samples approach the data manifold assumption 2 is also unjustified its main effect seems to be to eliminate a few terms in the projected jacobian in the proof but its relevance and whether it is reasonable in practice is entirely unmentioned finally it is unclear why this notion of measure valued differentiation is needed first differentiation in measure spaces is no different from differentiation in other infinite dimensional functions spaces the usual notions of gateaux and frechet differentiability apply second the derivatives in questions are not true measurederivatives in the sense that the argument to the function being differentiated is not a measure it is a finite dimensional parameter in the end this seems essentially a derivative of a multivariate functiondocsepin the paper wgan with a squared zero centered gradient penalty term wrt to a general measure is studied under strong assumptions local stability of a timecontinuous gradient ascentdescent dynamical system near an 
equilibrium point are proven for the new gp term experiments show comparable results to the original wgangp formulation wrt fid and inception score overall i vote for rejecting the paper due to the following reasons the proven convergence theorem is for a timecontinuous fullbatch dynamical system which is very far from what happens in practice stochastic time discrete optimization with momentum etc i dont believe that one can make any conclusions about what is actually happening for gans from such an idealized setting overall i dont understand why i should care about local stability of that dynamical system given the previous point i feel the authors draw too strong conclusions from their results i dont think theorem 1 gives too many insights about the success of gradient penalty terms there are only marginal improvements in practice over wgangp when using other penalty measures further remarks in the introduction it is claimed that mode collapse is due to js divergence and lowdimensionality of the data manifold this is just a conjecture and the statement should be made more weak the preliminaries on measure theory are unnecessarily complicated eg partly developed in general metric spaces i suggest that the authors try to simplify the presentation for the considered case of rn and avoid unnecessarily complicated mathy definitions as they distract from the actual results after rebuttal after reading the authors rebuttal i increased the my rating to 6 as they addressed some of my doubts i still think that the studied setting is too idealized but it is a first step towards an analysis ### Summary:
All three reviewers expressed concerns about the assumptions made for the local stability analysis. The AC therefore recommends revise-and-resubmit.
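For reference, the zero-centered squared gradient penalty discussed in these reviews is typically implemented along the following lines. This is a generic sketch rather than the paper's code; the discriminator, the penalty measure (whatever distribution the batch `x` is sampled from), and the coefficient are placeholders.

```python
import torch

def zero_centered_gradient_penalty(discriminator, x, coeff=10.0):
    """Penalize ||grad_x D(x)||^2 at samples x drawn from some penalty measure
    (e.g. real data, generated data, or interpolates between the two)."""
    x = x.clone().requires_grad_(True)
    d_out = discriminator(x)
    grads, = torch.autograd.grad(outputs=d_out.sum(), inputs=x, create_graph=True)
    # Squared, zero-centered penalty (as opposed to the (||grad|| - 1)^2 WGAN-GP form).
    return coeff * grads.flatten(1).pow(2).sum(dim=1).mean()

# Toy usage with a stand-in discriminator.
disc = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
penalty = zero_centered_gradient_penalty(disc, torch.randn(32, 8))
```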
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a transfer learning strategy for improving compositional generalization of semantic parsers based on pretrained language models before finetuning the model on data from the target domain the authors propose a prefinetuning step where models are trained on compositional splits of data from another source domain with the goal to transfer the models learned knowledge about language compositionality during this prefinetuning step to the final learning stage on the target domain therefore improving compositional generalization to this end the authors propose a prefinetuning method which encourages the model to discover representations of natural language that are invariant against its compositional structures this is achieved by iteratively freezing the encoder or decoder modules during prefinetuning and training the encoder and decoder modules on compositionally disjoint splits of the source data such that the encoder learns representations that are robust against distributional shift of language compositionality while a like this idea there are several issues with the proposal approach 1 comparison with stateoftheart there is little information in section 5 about comparison with existing approaches in compositional generalization for semantic parsing indeed in recent years several seminal works have emerged pushing accuracies on some synthetic tasks like scan to near 100 accuracy these works are not mentioned in section 5 while the model outperformed the currently best approach shaw et al 2021 on geotmcd the lower results on other simpler tasks like scan make me feel a bit concerned about the results perhaps this is because only few models are evaluated on geotmcd so far and previous approaches more tailored to the contextfree utterances on geo eg herzig and berant would actually perform better 2 methodology another issue is related to the proposed approach itself while prefinetuning encoder on compositional split a and the decoder on compositional split b could encourage the encoder to learn representations that generalize better to split b the generalization strategy learned by the encoder might be specific to split b only and might not be able to generalize to other compositionally novel distributions eg the final evaluation data ideally during the prefinetuning stage the model need to learn to generalize well to arbitrary mismatched splits but only presenting one set of splits ab might not be enough for the model to learn a more generalpurpose strategy 3 transferability across datasets transfer learning on nlp tasks would require the source and target domains share reasonably amount of common language patterns in order to perform well however the tasks used in this paper have drastically different utterances in language styles and compositional patterns which makes transfer learning quite nontrivial for example scan only contains toyish words like jump and simple composition strategies like concatenation jump twice it would be doubtable if learning to generalize well on this toyish domain would be useful for handling realworld utterances with diverse language style like geo the authors could present more analysis in terms of the language and compositionality styles of those datasets in order to have a better understanding of the upperbound performance of transfer learning approaches for compositional generalization this paper presents a nice idea for 
improving compositional generalization of neural semantic parsers the results on geotmcd outperforms the currently best approach however there are issues with both experimentation and the methodology docsepthis paper is focused on the problem of compositional generalization in semantic parsing and introduces a method called duel which involves prefinetuning iteratively on compositional traintest splits from other datasets before transferring to finetuning on the training data from the target dataset the method involves using the compositional traintest split from one dataset and training their encoderdecoder model iteratively such that the encoder parameters are updated based on the test data from that dataset and the decoder parameters are updated based on the training data from that dataset after this prefinetuning the model is finetuned on the training data from the target dataset they find that their model outperforms baselines involving 1 finetuning on the target task only and 2 prefinetuning on the merged data from the other dataset without the encoderdecoder split they find that their method largely does not help with the extremely low numbers on cogs structural items but the margins of improvement are larger for geoquery data and scan data with the authors claiming a new sota result on one of the splits for geoquery overall i thought that this was an interesting paper which was mostly clearly written and organized and generally i liked the method that was introduced im leaning toward acceptance and i can imagine the concerns below being addressed satisfactorily and further increasing my confidence i had a few questions and potential concerns that weaken my confidence in the impact of the contribution the first question involves the reasoning behind the particular design of the method the authors lay out a rationale for training the encoder parameters on the test component of each split and the decoder on the train component of each split but the reasoning given is not terribly transparent to me and i was left wondering whether similar results could be achieved by instead training the encoder on the train component and the decoder on the test component was this something that the authors tried i think that including this comparison could be informative with respect to the importance of setting up the method in this specific way another confusion i had involved the original purpose of these various datasets and how this related to the current usage for instance not being familiar with geoquery the description in the paper led me to believe that it was designed as a qa dataset so i was wondering why any sota would exist for semantic parsing on a qa dataset a google search suggests that geoquery is in fact annotated with semantic parses but this confusion could be alleviated by making clearer that the dataset is used for semantic parsing i was similarly wondering about use of scan for semantic parsing since my understanding was that scan was designed for mapping commands to actions if this is correct where are the semantic parses coming from for that dataset i imagine that since it is synthetictemplatebased producing semantic parses in a rulebased manner may be straightforward but it wasnt clear to me from the paper how this was working my additional concerns are focused more on the impact of the contribution the baselines that the paper compares against are for the most part not external models rather the authors are comparing only against baseline versions of their own model without 
the key components of the new method so the improvement over the baselines indicates that the method does improve over the same model without the iterative compositionalsplit training however it is only in the one case of the geoquery dataset that the authors mention the existing sota which they have beaten suggesting that there are stronger sota models on the other datasets or at least on cogs if scan is not used typically for semantic parsing what this leads me to believe is that while the method improves over a vanilla model it may not be improving over stronger models that use alternative methods for cogs and possibly scan im curious especially whether other models have made better headway on the cogs structural test which is showing especially low performance here it would be helpful to get greater clarity on how the presented results relate to the strongest existing results from other models across all datasets finally im not totally sure how surprisedimpressed we should be by improvements from this method specifically im wondering how impressed we should be that we see a performance boost from training models to generalize across a specific type of split eg one in which the test sentences are longer than the training sentences for exactly the target task semantic parsing the authors make this general observation in section 53 when they acknowledge that the model works best when the compositional splits are maximally similar so to what extent is it ultimately somewhat obvious that training models to handle a given type of split will help it on this type of split to put the above concern another way to what extent are we potentially no longer testing models on compositional generalization if we train them directly to be able to generalize in the particular way that is needed for the selected compositional split if the performance boost is very specific to a particular relationship between test and training data is this simply allowing the model to learn strategies specific to that particular type of generalization such that it no longer needs to use composition to achieve that generalization this would defeat the purpose of trying to improve models ability to use actual compositional processing to show the desired generalization so i would like to see the authors address this concern in general i found the method interesting and the paper overall clear however i have some remaining questions about certain aspects of the method and datasetstasks as well as some potential concerns about the impact of the contribution which i would like to see addressed before i can strongly endorse this paper docsepthis paper proposes a training procedure for encoderdecoder models applied to semantic parsing which aims to improve the models ability to compositionally generalize successfully handle novel combinations of words and structures where combinations were not seen in training the approach relies on prefinetuning training a model on a different dataset than the target dataset that also the requires the same sort of compositional generalization as the target dataset before then training on the training set of the target dataset and then evaluating zeroshot on the compositional set of the target dataset in the standard way in prefinetuning the decoder is only updated on the training set while the encoder is updated on the compositional generalization set the approach is evaluated using two different pretrained encoderdecoder transformer architectures on three different semantic parsing 
compositional generalization datasets from past work where it obtains consistent albiet somewhat small improvements over a baseline that prefinetunes all model parameters and outperforms a past stateoftheart model on one dataset strengths s1 i appreciated the papers use of a nonsynthetic dataset geoquery as i feel that this is an underexplored area of work on compositional generalization which will be useful to explore how and when nets fail to generalize on real data and how to fix them s2 the proposed approach seems simple and easy to implement s3 the experiments were overall thorough at least in the scope of semantic parsing compositional generalization evaluating on three different datasets and two different models s4 i found the demonstration of the benefits of prefinetuning interesting and convincing s5 the paper was extremely clearly written in particular the description of the method weaknesses w1 i wasnt totally convinced that the method works well on strong tests of compositional generalization the geocd and scancd splits although they follow past work are defined using a compound divergence method that as the paper points out does not ensure that compounds are completely absent but only infrequent in the training set the cogscg lexical challenge seems to mostly be obviated by pretrained representations while i did find the length generalization results to be a convincing improvement and a more reasonable test of structural generalization no method really seems to help much on the harder structural generalization test of cogscg and concerningly even the proposed method makes no improvement in what seems to be the most a priori favorable experiment design for it described in 54 although i did appreciate including this negative result w2 its not totally clear to me why the method should enable compositional generalization in general and i feel like it would help to strengthen the motivation and intuition for the method or perhaps do some some analysis could be done to indicate why its working where it is the paper motivates duel as learning to represent input sequences in a way that facilities compositional generalization but its not totally clear to me how the alternating freezing does this it seems like the metalearning approach which directly trains for compositional generalization that other work has employed is more directly suited to this or even perhaps some adversarial training approach if the paper is aiming to learn representations fx that encode invariances across s and s it would help if the paper could somehow characterize the representations or models that duel learns eg providing something like a fixedpoint analysis which the algorithm has converged to the desired representation since the difference in representing s and s is small in section 4 starts to do but wasnt totally clear to me or alternatively perhaps the paper could do some empirical investigation of distributional differences in fx when x is drawn from s versus from s w3 some of the choices in the design of the method felt a bit arbitrary and if they were better justified id feel more confident that the approach is working for understandable reasons why reinitialize the parameters of g in finetuning the prefinetuning setup on p does not match the finetuning setup on q in that finetuning updates both f and g why not update f in training on p or keep f fixed in training on q while in some compositional generalization tasks s is intuitively harder eg longer inputs and outputs than s in some way in other 
tasks s and s are interchangeable so does it matter that g is updated on s and f on s and that the last updates in prefinetuning are always done on s w4 it would help to present statistical significance results or standard deviations across multiple seeds as its a bit difficult to interpret the significance of the improvements however since the improvements are consistent albeit small i dont think this is a crucial weakness minor comments it would help to give some intuition for the alpha in section 3 how is this value chosen the update equations 13 with a single stepsize make it seem like sgd is being used but from the appendix its adam how are the logical forms updated in the cogsvar splits to match the changes to the input sentences typos pg 4 standard supervise learning standard supervised learning pg 5 several grammatical errors at the end of section 4 pg 6 bertsmall bertbase pg 6 geotmcd2 geocd pg 7 when will duel works best when will duel work best pg 7 compsitional compositional pg 7 duel helps extracting duel helps extract i feel a bit borderline about this paper as the method seems a bit limited and heuristic not being clearly designed for compositional generalization or showing convincing results on the hardest tests of compositional generalization but it does seem to show consistent if sometimes small improvements on a couple models and several datasets the methodology seems sound and the paper is very clearly written ive put an overall score of 5 for now but i look forward to discussion update after the response thanks to the authors for their thorough response to my comments the explanations and new ablation results helped convince me that the choices made in designing the method were reasonable i also appreciate the standard deviations which make me confident that the improvements are real im in favor of accepting this paper and have updated my score to a 6 from a 5 ### Summary:
The authors tackle the problem of compositional generalization, i.e., generalizing to novel combinations of familiar words or structures. They propose a transfer-learning strategy built on pretrained language models: a pre-finetuning stage in which the model is first trained on compositional train/test splits from other datasets before being finetuned on the training data of the target dataset. Although the technique brings some improvements and the authors did their best to address the reviewers' questions, it remains unclear (a) why the method should work in principle, whether there is any theoretical backing, and how it formally relates to meta-learning; (b) how the approach compares to data-augmentation methods, since pre-finetuning requires more data, albeit from a different dataset (see for example httpsopenreviewnetforumidps3imnscugk); (c) how the method renders a model more robust to distribution shift (for example, based on the COGS results it does not help structural generalization; do the gains come from lexical generalization?); and (d) whether the method works on larger-scale or more realistic datasets such as CFQ, ATIS, or machine translation (httpsarxivorgpdf191209713pdf, httpsarxivorgabs201011818).
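For reference, the alternating-freeze pre-finetuning step described in these reviews (decoder updated on the source split's training side, encoder on its compositional side) can be sketched roughly as follows. The module names, optimizer, and toy data are invented for illustration; this is not the authors' implementation.

```python
import torch
from torch import nn

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def prefinetune_step(encoder, decoder, loss_fn, batch_train, batch_comp, opt):
    """One alternating update: train the decoder on the standard split,
    then train the encoder on the compositional (held-out) split."""
    # 1) Freeze encoder, update decoder on the training side of the source split.
    set_trainable(encoder, False); set_trainable(decoder, True)
    opt.zero_grad(set_to_none=True)
    loss_fn(decoder(encoder(batch_train["x"])), batch_train["y"]).backward()
    opt.step()

    # 2) Freeze decoder, update encoder on the compositional side of the source split.
    set_trainable(encoder, True); set_trainable(decoder, False)
    opt.zero_grad(set_to_none=True)
    loss_fn(decoder(encoder(batch_comp["x"])), batch_comp["y"]).backward()
    opt.step()

    set_trainable(decoder, True)  # restore before the next iteration / finetuning

# Toy usage with stand-in encoder/decoder modules.
enc, dec = nn.Linear(16, 32), nn.Linear(32, 4)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
batch = {"x": torch.randn(8, 16), "y": torch.randint(0, 4, (8,))}
prefinetune_step(enc, dec, nn.CrossEntropyLoss(), batch, batch, opt)
```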
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes a conservative smoothing technique by adding perturbation to the states to improve the robustness of learned policy on offline rl theoretically they claim their work enjoys a tighter suboptimality bound in linear mdps strengths prior offline rl methods mainly focus on the ood actions instead of states this paper targets the robustness issue for q value via perturbing the states this paper is wellwritten and easy to follow weaknesses the algorithm employs the ensemble q functions it is well known that the ensemble technique can bring robustness to learning however it is unclear whether or not the ensemble technique brings the main contribution to the robustness to this end comparing single agent methods such as cql and bc is unfair moreover on the results of the experiments the proposed rorl is also very close to the ensemble baseline edac yes docsepthe paper investigates the problem of training robust rl agents with offline datasets the paper claims that regularizing policy and value networks to have similar values against adversarial perturbations and applying this technique to pbrl can achieve stateoftheart performance on the d4rl benchmarks in both standard and adversarial settings the paper also provides a theoretical analysis of the suboptimality gap of the proposed algorithm in linear mdps strengths the paper thoroughly analyses the proposed algorithm both empirically and theoretically the proposed method achieves stateoftheart performance in the standard offline rl benchmark while the performance improvement is quite marginal weaknesses the paper does not provide experimental results on expert or nearexpert datasets the proposed algorithm requires computing an adversarial example hats which is computationally expensive i think the proposed method is much slower than other offline rl algorithms since it has to solve minimax problems please report the wallclock time of the proposed method in the main text docsepthe authors propose an approach to offline reinforcement learning that is robust to small perturbations in the observation space such that the changes are not detrimental to the performance of the final policy the achieve this by encouraging the value estimator network to be smooth over the state space while being conservative on out of distribution samples moreover the learned policy is also constrained to change less with these perturbations experimental results show that the proposed algorithm is able to perform competitively with current robust baseline approaches and enjoy increased robustness over adversarial attacks strengths s1 overall the motivations for the approach are intuitive and easy to follow the paper is well written and clear excluding some minor comments in the questions section s2 the proposed attack metrics are diverse and well defined under the current framework the experimental results show that rorl is more robust towards the proposed attacks as compared to other methods while being able to perform competitively under normal conditions s3 theoretically the proposed framework rorl enjoys a tighter suboptimality bound that pbrl weakness c1 my main complain towards the paper is that while the authors specifically tackle the tradeoff of conservatism and robustness in offline rl no clear metric has been defined to quantify the robustness of an approach hence the reader is forced to trust their eyes over the evaluation curves to 
judge the robustness of the approach it may be insightful to invest some thoughts on quantifying the robustness by for example measuring the area under the curve on the performance under attack curve where the scale is normalized over the overall variation and dimensionality of the given dataset c2 while this might come off as a knee jerk comment it would be interesting to see the same set of experiments on more challenging benchmark such as antmaze especially as the proposed approach is claiming to better generalization ability and improve the overall robustness of the policy will the generalization ability result in significant improvements in a more challenging domain while the authors clearly mention the adversarial state sampling as their main limitation they do not quantify this slow down for different state sampling approaches it would be interesting to see the actual effect on the compute time as a percentage of the total training time docsepthe authors handled my major concerns on approximation and experiments by providing additional responses and adding more experiments id like to improve my score as boarderline accept training robust rl agent from offline datasets is an important yet challenging problem this paper proposes rorl offline rl algorithms with conservative smoothing the main idea is to add smooth constraints forcing agent to generate outputs on adversarially perturbed inputs to offline rl algorithm here to avoid overestimation issues the authors also utilize uncertainties from qfunction as a penalty term the proposed method not only achieved strong results on mujoco tasks from d4rl datasets but also showed that learned agents are more robust to perturbation strength motivation is clear and the proposed method sounds reasonable weakness lack of evaluation on challenging domains even though mujoco tasks are standard benchmark in offline rl it would be nice if the authors can evaluate on more challenging tasks such as antmaze or atari also it would be nice if the author can consider the combination with more stateoftheart offline rl algorithm 1 overall i think this paper studies an important research question and proposes a reasonable solution also the authors showed the gains from the proposed method very clearly on standard offline rl benchmarks however at the same time there are several concerns ie more evaluation on challenging tasks combination with more statedoftheart offline rl algorithms and so on about the draft because of that id like to suggest weak reject but im also willing to change my score based on other reviews and author responses 1 kostrikov i nair a and levine s 2021 offline reinforcement learning with implicit qlearning arxiv preprint arxiv211006169 as pointed out in section 8 the overhead induced by the proposed method can slow down the training even though this can be handled later it would be nice if the authors also can clarify the training overhead from the proposed method eg comparing training time of rorl and other offline rl algorithms ### Summary:
All reviewers agree that the authors' response has addressed their primary concerns. Reviewer frmm had two reservations that resulted in a borderline rating: (1) concerns about how the adversarial samples were generated, and (2) a request for evaluation on AntMaze. The authors' follow-up response and further experiments address (1) and partially address (2); it would be great to see RORL results on AntMaze in the final version. Overall, the performance of RORL is competitive with state-of-the-art methods on MuJoCo and Adroit tasks, with fewer ensemble elements needed. The main benefit is improved performance under adversarial attack, where RORL significantly improves over existing methods. I think the paper makes a nice contribution that the community will find valuable, and I encourage the authors to think carefully about how to integrate the additional experiments into the paper to resolve the questions raised by the reviewers.
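For reference, a much-simplified sketch of the kind of conservative smoothing regularizer discussed in these reviews is shown below. The network signatures are assumptions, random perturbations stand in for the adversarial state search the reviewers describe, and this is not the RORL implementation.

```python
import torch

def smoothing_regularizer(q_net, policy, states, actions, eps=0.01, n_samples=5):
    """Penalize changes in Q-values and policy outputs under small random state
    perturbations (a simplification of searching for worst-case perturbations)."""
    q_clean = q_net(states, actions)
    pi_clean = policy(states)
    loss = 0.0
    for _ in range(n_samples):
        noise = eps * torch.randn_like(states)
        loss = loss + (q_net(states + noise, actions) - q_clean).pow(2).mean()
        loss = loss + (policy(states + noise) - pi_clean).pow(2).mean()
    return loss / n_samples

# Toy usage with stand-in callables (real Q-networks and policies would be nn.Modules).
q_net = lambda s, a: s.sum(dim=1, keepdim=True) + a.sum(dim=1, keepdim=True)
policy = lambda s: torch.tanh(s[:, :2])
reg = smoothing_regularizer(q_net, policy, torch.randn(8, 4), torch.randn(8, 2))
```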
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes dnn quantization with attention dqa which uses a learnable linear combination of high medium and lowbit quantization at the beginning it gradually converges to a single lowbit quantization at the end of training experiments show that dqa outperforms the naive quantization and the binaryrelax method consistently across three datasets and two networks strengths 1 according to the experiment doa performs better than the naive quantization strategy and binaryrelax consistently across the experimented datasets and networks weaknesses 1 the presentation needs to be improved there are grammar errors and typos in the paper 2 the paper compares the performance of doa with only one related quantization work binaryrelax it is not sufficient to demonstrate the effectiveness of the proposed work there are many quantization works both quantizationaware training and posttraining quantization it seems that some of them may have better performance in the experimented settings for example the work lqnets reports an accuracy of 68 with 2bit weights and 32 bits activation with resnet18 on imagenet but doa proposed in this paper only achieves an accuracy of 669 minor comments or questions 1 how to decide what quantization method to use eg minmax sawb bwn twn when using daq in practice appendix a only defines each quantization method but doesnt give any guidance on how to choose them 2 for experiments on the imagenet dataset using resnet18 why not report r18br and mv2br results 3 in section 42 the paper mentions all reported validation accuracies are the results of a single training run it might be better to report averaged results across several runs even if the convergence of the networks is not noisy 4 in table 1 it seems that dqa using swab consistently gives better results than the fp version do you have any insight regarding this overall i think the paper is not good enough due to aforementioned nontrivial weaknesses docsepthe papers addresses the problem of compression of neural networks the paper builds upon binaryrelax prior work the precision is adapted during training with a mixturebased quantization method through temperature cooling and a set of an attention vector a the idea of the method called dqa is to progressively moves from a mixture of quantization functions mixing high with low precision for instance 32 bit with 2 bit training to a single one low precision towards the end of the training the paper states that the method can be used with several types of quantization methods the evaluation is carried on computer vision architectures resnet18 mobilenet for image recognition tasks cifar10 cifar100 imagenet ilsvrc 2012 1 in my understanding what the paper call attention is simply a value weighting the importance of the different quantization functions see eqn 23 therefore this terminology is misleading in my opinion as the vector of attention a only depends on the trainable parameter alpha and not does not depend on the input either or activation as one would imagine with this name maybe i misunderstood something but this is what i infer from eqn 4 eqn 23 and figure 1 2 the paper does not consier in the literature review techniques for quantizing neural networks abc that to my knowledge are stateoftheart for quantizing popular neural networks a permute quantize and finetune efficient compression of neural networks martinez et al cvpr2021 b and the bit goes down 
revisiting the quantization of neural networks stock et al iclr2020 c training with quantization noise for extreme model compression fan et al iclr2021 at a high level some elements are similar to these approaches even though the details may differ one noticeably similarity is the various of precision which is already present in the work by fan et al c when they consider both blocks that are not quantized ie 32 bits versus some that are quantized with lowprecision and randomly choose in my own experience choosing randomly a choice is better that relaxation so i guess the paper should have included such a comparison in any case the paper should be better positioned against the recent literature 3 the paper is compared to poor baselines as a result the paper reports results that do not look competitive the state of the art abc on imagenet ilsvrc 2012 which is the most significant benchmark in the paper for instance stock et al b report some results with r18 figure 3 left table 3 that are as follow compression factors of x20 for top1 accuracy at 6787 in the submitted paper additionally more recent papers a and c are compared to b on larger networks and show that they further improve results therefore i conclude that the proposed approach is not competitive while additionally requiring a more engaged scheduling that may not generalize as well to other training settings the paper states that it could be combined with any compression method therefore in this context it would be worth using the same or similar quantization as in abc and show how the method compares to these methods 4 formally i have nothing against having in the same section the introduction and the related work in the case of this paper i found that the introduction is actually more a related work than a formal introduction providing the motivation and rationale of the paper content at the core of the initial discussion this discussion appears later in the background section while i know well this area and therefore the problem at stake i would advise to rework jointly these two sections 5 the paper mentions that the method could be used for quantization activation but only addresses the case of weight quantization i think that a lot of practical considerations would appear with activation quantization so i would suggest either to support this claim with experiments or to suppress or soften this unsupported claim 6 the paper needs some polishing some mistakes indicate that the paper was not analyzed by a spellchecker for instance in the abstract conterparts counterparts the term bitwidth is not established and occurred at least once with a typo the paper does not demonstrate that the method is a significant contribution to the state of the art in quantizing neural networks the experiments are only applied on image classification task with small architectures and in the setting that i found comparable in the literature the results do not look great which questions the significance of this work docsepthis paper attempt to address a challenging quantization problem ie lowbit quantization this work utilizes a learnable linear combination of high medium and lowbit quantization at the beginning while converging to a single lowbit quantization at the end of the training in the quantization procedure multiple quantizers and the corresponding attention matrices are adopted to fuse the quantized weights or activations pros the paper is well written and the idea is easy to follow extensive ablation studies are provided to evaluate 
different components of the proposed method cons more parameters are introduced in the training stage such as this will increase the computation and storage cost more theoretical and experimental analysis should be given to study that multiple quantizers with different bitwidth are conducted in the proposed method which will increase the storage and computation cost for quantization in the experiments the authors compare their method with the corresponding counterparts with the same bit of n1 however the proposed method has three quantizers with different bitwidth and n1 is the lowest bitwidth therefore this comparison seems unfair for a fair comparison the baselines and the proposed method should be compared under the same computation and storage cost the proposed method has not been compared with stateoftheart approaches which cannot comprehensively evaluate the proposed method this work utilizes a learnable linear combination of high medium and lowbit quantization at the beginning while converging to a single lowbit quantization at the end of the training in the quantization procedure multiple quantizers and the corresponding attention matrices are adopted to fuse the quantized weights or activations which will increase the computation and storage cost some experiments are conducted to evaluate the proposed method however it lacks some comprehensive and fair comparison docsepthis work presents a training method for lowbit network quantization while training it employs a multibitwidth paradigm in order to alleviate the nonsmooth optimization landscape with lower bitwidth it uses a temperature parameter and a penalty term to force the network to gradually converge to the target low bit experiments are conducted on cifar10 cifar100 imagenet classification with resnet 18 and mobilenetv2 strength authors shows the proposed multibiwidth training effectively reduces quantization error and helps smooth loss landscape with some sample cases and visualizations weaknesses overall performance of the given approach is not satisfying most recent quantization papers mainly conduct experiments on the largescale imagenet dataset since the cifar datasets are prone to easy overfitting almost all papers i know doing lowbit quantization has better results than this one on weightonly 2bit quantization with mobilenetv2 sat 668 neural network quantization with scaleadjusted training bmvc 2020 deepcomp 581 deep compression compressing deep neural networks with pruning trained quantization and huffman coding iclr 2016 this work 522 on both weight and activation 2bit quantization with resnet18 pact 644 pact parameterized clipping activation for quantized neural networks arxiv 18 lqnet 649 lqnets learned quantization for highly accurate and compact deep neural networks eccv 18 sat 655 neural network quantization with scaleadjusted training bmvc 2020 this work 604 some technical details are not clear the authors use a penalty term equation 6 to regularize the attention weights of different bitwdith however it is not known whether all bitwidths in the network will be converged to the lowest bit which is the target if some bitwidths are not property converged will there be any issue on the performance writing needs improvement there are a lot of grammar errors and typos page 2 last paragraph a way how to train a delete how page 3 background 4th paragraph note that because no comma page 3 background 4th paragraph this problem is most pronounced for low bitwidhts typo overall this work presents a new approach to help 
improve the training convergence of lowbit quantization for neural network due to the weak results on largescale datasets and unclear technical details i do not think it meets our bar at iclr ### Summary:
This paper proposes a new learning procedure for quantizing neural networks. The proposed DQA method uses attention to obtain a linear combination of existing network quantization techniques and uses it to pursue more efficient quantization. Overall, the submission appears to have been written in haste, and it contains many typos and errors. Above all, the claim that the method can be applied to various existing techniques is not demonstrated experimentally, since the comparison covers only a single, somewhat dated baseline. In addition, as in [1], it seems necessary to quantize not only weights but also activations, or to verify the method on lightweight networks such as MobileNetV2 rather than ResNet. [1] Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss, ICCV 2021.
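For reference, the attention-weighted mixture of quantizers that these reviews describe can be sketched as follows. The min-max quantizer, bit-widths, and temperature values are placeholders, a real quantization-aware training setup would need a straight-through estimator for the rounding, and this is not the DQA implementation.

```python
import torch
from torch import nn

def uniform_quantize(w, n_bits):
    """Simple min-max uniform quantizer (one of several that could be plugged in).
    Note: torch.round has zero gradient; real QAT uses a straight-through estimator."""
    if n_bits >= 32:
        return w
    lo, hi = w.min(), w.max()
    scale = (hi - lo).clamp_min(1e-8) / (2 ** n_bits - 1)
    return torch.round((w - lo) / scale) * scale + lo

class MixedQuantWeight(nn.Module):
    """Weight represented as an attention-weighted mixture of bit-widths.
    Lowering the temperature sharpens the softmax, so training can gradually
    collapse onto a single (ideally the lowest) bit-width."""
    def __init__(self, shape, bit_widths=(8, 4, 2)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(shape) * 0.05)
        self.alpha = nn.Parameter(torch.zeros(len(bit_widths)))  # attention logits
        self.bit_widths = bit_widths

    def forward(self, temperature=1.0):
        attn = torch.softmax(self.alpha / temperature, dim=0)
        quantized = [uniform_quantize(self.weight, b) for b in self.bit_widths]
        return sum(a * q for a, q in zip(attn, quantized))

# Toy usage: soft mixture early in training, nearly one bit-width late in training.
layer_w = MixedQuantWeight((64, 32))
w_soft = layer_w(temperature=1.0)
w_hard = layer_w(temperature=0.05)
```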
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: new tool for inferring underlying properties of partially observed spiking neural networks generalization of the fully observable solution by rene et al based on mean field modeling on net effect of unobserved neurons validated on within model class simulated data strengths clear setup and motivation novelty interesting attempt at extracting more interpretable latent models with a focus on modeling spiking activity of neurons including unobserved ones nice mix of traditional comp neuro and ml estimation weaknesses writing clarity i would have appreciated a clear spelling out of the graphical model separation between what counts as observations inferred latent variable and model parameters the current text mixes in mean field technicalities which makes it less clear than it should be in particular the link between n and y eq4b backreferring to eq1 was difficult to get from the text but i found both section 3 and 4 meandering and hard to follow in places the interpretability of the parameters may quickly become problematic for out of distribution data in particular when its not clear that the observed neural responses can be easily partitioned in a small set of homogeneous subpopulations especially given the extremely strict notion of homogeneity required here arguably heterogeneity if a key feature of brain circuits yet is not clear how sensitive is the estimation to deviations from the strict homogeneity assumption metastable dynamics deviate by construction from the model assumptions of the most commonly used latent dynamical systems models making them a perhaps unfair choice as the main benchmark for comparison across models simulations are often somewhat anecdotal fig 2de comparison to the fully observed scenario is rather trivial if the model assumptions are radically different from the ground truth it is unreasonable to assume its estimated parameters to match the data minor photostimulation is nonstandard terminology especially if talking to an experimentalist causal manipulation optogenetics etc would prove more useful technical limitations of the applicability of the procedure to real data should be discussed a lot more no ethical issues docsepthe authors propose a new type of latent space model for neural spike trains based on a spiking neural network snn they use mean field approximations to abstract parts of the snn resulting in the latent dynamics but keep the initial formulation for the observed neurons the authors show that after predefining a single or multiple neural clusters they can recover the connectivity values by an em algorithm from snippets of 10 sec activity of only some observed neurons the proposed latent model is able to reproduce some key functionality of the snn and outperforms other latent variable benchmark models strengths the paper is clearly written and the authors define the posed problem nicely the work is technical sound and the experiments are worked out thoroughly the proposed latent model is an interesting approach to bridge the gap between pure statistical models and more biologically interpretable models it has potentials for further investigations and the applications to experimental data however there are several weaknesses weaknesses 1 the authors test their setup only on one parameter set it is not clear that the results are robust to different network configurations 2 the comparison which the authors make between the different 
models is quite unfair as the ground truth data is generated from the microscopic model the neulvm is based on they share therefor the same inductive biases whereas the other three models could be advantageous on differently generated data 3 a demonstration on real experimental would have been nice but this is potentially beyond the scope of this manuscript 4 there are several open questions see questions especially when it comes to an application to experimental data 5 the model makes quite some assumptions on the network structure number of ei clusters base connectivity pattern etc this is a limitations which the authors should discuss in more detail minor comments fig 2f a log scale for the yaxis could be more appropriate update after considering the responses to this review and the reviews of the other reviewers the score was raised by 1 the authors mention two shortcomings before applying their method to experimental data however there are more open questions and limitations which could be commented on see also questions the authors did not comment on potential negative societal impact of their work docsepin the paper mesoscopic modeling of hidden spiking neurons the authors introduce a neuronallygrounded latent variable model for fitting populations of observed and hidden units the latter are described at the mesoscopic level the model is evaluated on synthetic data with a single homogeneous population and with multiple populations and compared against other competing methods the authors address a very difficult problem the problem of dealing with unobserved populations in spiking neural networks which has plagued snn modeling for decades they propose a very promising approach introducing inductive biases from biology to model hidden populations but still keeping the level of description of these populations coarse therefore significance of this work is excellent clarity of the paper and figures is excellent too the text is very well written and organized focusing on the single hidden population in the main paper and the more general case in the appendix all equations are clear with symbols well explained the work builds on work on mesoscopic modeling of snns by rene et al and extends this to a latent variable model which is trained by the baumviterbi algorithm this is an interesting and novel approach the evaluation of the model shows some of its strengths when compared to plds slds and glms but falls short of convincing me that this would hold in more realistic cases where there is some variability in mostly homogeneous hidden populations see questions and limitations below the authors do not see their method as being ready for application to real data yet cf discussion nevertheless this work is an important stepping stone to deal with unobserved population activity the authors havent made their source code available to my understanding the source code should be available to reviewers the authors did not discuss any societal impact this is appropriate for this work the authors briefly discuss limitations regarding nonidentifiability and potential need for preprocessing however given the seemingly strong assumption of homogeneous hidden populations i would have expected a more thorough discussion of limitations docsepthe paper proposes a latent variable model for modeling observed activity in spiking neural networks while taking into account the activity of unobserved neurons to do this the authors propose a meanfield approximation approach to reduce the effect of all unobserved 
neurons to a lower dimensional summary quantity with simplified parameters the proposed model neurlvm is fit via the approximate hard em algorithm the model is validated in two simulations where it provides accurate recovery of net population activity and of switching states the fitted reduced neurlvm model was also able to predict the effect of stimulation on the true simulated network the paper is clear and of highquality it provides an original contribution for including the effect of unobserved neurons when fitting spiking neural network models this is a significant step forward in models of neural spiking responses the proposed methods and experiments were clearly described with sufficient detail additionally the model showed good performance in the simulations one weakness is the homogeneous population assumption seems to limit the generality of the method it is not clear how the approach would work for heterogeneous populations with higher dimensional activity nonetheless i think the proposed method is an important step forward update i appreciate the authors additional experiments and responses to my questions in particular i think including the heterogeneous population experiment strengthens the evaluation of the proposed approach based on the additional evaluations and clarity i have increased my score by a point the authors discussed two limitations parameter recovery and choices of parameters and architecture for applications to real data in their conclusion ### Summary:
Dear authors, congratulations on your paper being accepted; the reviewers unanimously recommended acceptance. The reviewers made a number of recommendations on how to improve the paper further, in particular with respect to clarity of writing and explaining the motivation behind the different analyses. We strongly encourage you to use this feedback to improve the paper; if need be, additional clarifications can be added in the supplement. In addition, it would indeed be highly useful to make your source code publicly available, as you indicated in your response. Best, your AC.
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this paper proposes a fast general framework fnas for neural architecture search nas problem to enhance the processing efficiency up to 10x times three interesting strategies uac lkp aeb for reinforcement learning rl processing are introduced in the proposed fnas and evaluated by extensive experiments to show their efficacy in particular the assumption that architecture knowledge is transferable has been verified by real observation however the authors paid more attention to introduce the fact based on observations and the thoughts of the framework design thus neglected the technical depth for the key component uac that has highest impact on the overall performance strengths the paper is wellorganized and wellwritten thus easy to understand including motivation approach and experiments the proposed framework fnas is general practical and convincing the three strategies uac lkp aeb for rl processing are based on real observations shown in figure 4 which positively supports the motivation of the approach the evaluations are conducted on extensive experiments with solid results including ablation studies for the three different components lkp uac and aeb although it might be not sufficient see next the weaknesses weaknesses the technical depth was neglected for example the uncertaintyaware critic uac should be considered as the key component of fnas framework because it was shown that the uac has highest impact on biggest contribution to the overall performance in terms of efficiency in table 4 however there is no any technicalmathematical introductions about how the uncertainty network u is obtainedprepared and how the u contribute to the nas in detail discussions regarding possible overfitting are not sufficient for example the constraint threshold delta of uncertainty is introduced into the uac strategy sec 41 but without any experiment results to show the impact of such hyperparameters in the fnas framework that has to be considered as tradeoff parameter against overfitting effects similarly in the aeb strategy sec 5 the buffer size n and the annealing term beta are hyperparameters that should have impact on the overfitting effects during rl processing however the authors did not provide any testing results to confirm their impacts for example why did the authors determine the buffer size n10 in the experiments the testing of fnas on vision tasks are not sufficient this paper provided results on classification imagenet and face recognition tasks but how about other tasks such as object detection tracking person reidentification and segmentation other questionable points the loss functions shown in eq 1 and eq 2 seem too simple are they really sufficient to get high performance is there any potential loss functions or improvements that would get better performance in table 1 why the numbers of gpu hours for mbv3 and efficientnetb0 are not shown it is inconsistent with textual description in sec 62 there is nearly 10x of acceleration that is not able to confirm in table 1 and 2 regarding the numbers of gpu hours like 20000 and 2000 do they indicate the real runtime in the experiments or only estimated values as we know 20000 hours are roughly equal to 23 years and 2000 hours are similarly equal to 28 months the reliability of the experiment results might be doubting in the references there are too many informal publications cited from arxiv instead they should be 
replaced by their formal publications at the corresponding conferences or journals docsepsummary the paper propose a few improvements to the samplingbased nas using rl 1 an uncertaintyaware critic to decide whether the sample needs to be trained 2 a lifelong knowledge pool to initialize the sample that needs training and 3 an architecture experience buffer to reuse old samples for rl training the experiments are done on imagenet facial recognition and transferability on object detection the proposed methods are compared with related works finally the paper finishes with ablation studies on both the effectiveness and transferability of the proposed modules strengths the paper is wellwritten with clear flow and structures the three proposed modules are novel and improve the search cost significantly while achieving better performance its great to see the authors has done a comprehensive comparison with the related methods for multiple tasks the ablation study also demonstrate the effectiveness of the three proposed modules weakness the improvement of top1 acc on imagenet is marginal without the 13 scale up and worse than some of the recently proposed differentiable nas work which requires far less search cost comparable to darts there could be more discussion on related work studying uncertainty in rl or in supervised learning given that it is one of the core modules in the proposed pipeline and uncertainty an important topic in generaldocsepin this paper the authors propose to use a samplingbased approach to neural architecture search which combines a lifelong knowledge pool uncertainty aware critic architecture experience buffer this approach has been demonstrated with vision tasks involving days of tpu training overall i rank 5 marginally below the acceptance threshold nas is an underexplored topic but the papers seems like an engineering project that combines multiple existing ideas from the others work and there lacks theoretical depth about clear mathematical formulation of the approach and reasoning on why the approach should work pros having access to tpu the nas topic integration of lifelong learning nas and several other ideas cons why lifelong learning should work considering that we are not in multitask learning and nonstationary environment scenario how do we know that the experimental results is not some coincidence can we put down the whole framework mathematically it seems that this paper has only two formulas about some loss functions can we reason about the math for example any ideas to better organize knowledge pool and ideas architecture experience buffer for a large number of architectures and parameters encountered docsepthis paper proposes an rlbased neural architecture search approach to decrease the searching cost by introducing three modules to estimate uncertainty restore parameters and store old models compared to mnas it can significantly reduce the search cost up to x10 while giving competitive accuracy this paper is generally wellwritten and wellmotivated except for some unclear sentences architecture knowledge is not well described compared to parameter knowledge the authors should clarify what they are and the difference between them in figure 4 it is unclear what the operators are and which operators are similar and different moreover details are missing on how to sample 100 optimal models in equation 1 the definition of the reward is missing lkp the acronym first introduced in page 5 is not described even if it accelerates the search process it entails 
additional memory due to the proposed module it stores learned networks so i think theres a tradeoff between search cost and the total memory we need to reserve from this i wonder whether reducing the search cost is more significant than the increase in required memory in table 3 why does fnas have higher flops than mnas this should be properly elaborated in table 4 the cases using two modules are missing it would be great to see the results to see which component actually affects the performance ### Summary:
this paper presents a mechanism for reducing the cost of the neural architecture search process based on accumulated experience which the reviewers found compelling with significant improvements in performance this is an intriguing idea however there were concerns about clarity that need to be addressed and more concerningly the paper lacked technical depth or details in several aspects described in the reviews the authors subsequent response and revisions have somewhat addressed these issues the reviewer discussion had mixed opinions with some for weak acceptance and others for weak rejection there were compelling points that the contribution is significant but overall this paper would benefit from thoroughly addressing the shortcomings mentioned in the reviews before it is ready for publication
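for reference, the uncertainty-aware critic (uac) and architecture experience buffer (aeb) discussed in the reviews above can be pictured with the minimal sketch below; the class names, the variance-based uncertainty estimate, the threshold delta, and the buffer size are illustrative assumptions only and not the authors' implementation, which the reviewers note is not spelled out in the paper.

```python
from collections import deque
import random

def signature(arch):
    # hashable key for a toy list-of-choices search space
    return tuple(arch)

class UncertaintyAwareCritic:
    # toy critic: tracks rewards of previously seen architectures and treats the
    # empirical variance as its uncertainty; a real critic would be a neural network
    def __init__(self, delta=0.1):
        self.delta = delta
        self.history = {}  # signature -> list of observed rewards

    def predict(self, arch):
        rewards = self.history.get(signature(arch), [])
        if not rewards:
            return 0.0, float("inf")  # never evaluated: maximal uncertainty
        mean = sum(rewards) / len(rewards)
        var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
        return mean, var

    def needs_training(self, arch):
        # only spend compute on architectures the critic is still unsure about
        return self.predict(arch)[1] > self.delta

    def update(self, arch, reward):
        self.history.setdefault(signature(arch), []).append(reward)

class ArchitectureExperienceBuffer:
    # keeps the most recent (architecture, reward) pairs for reuse in policy updates
    def __init__(self, buffer_size=10):
        self.buffer = deque(maxlen=buffer_size)

    def add(self, arch, reward):
        self.buffer.append((arch, reward))

    def sample(self, k=4):
        return random.sample(list(self.buffer), min(k, len(self.buffer)))
```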
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper presents a nas optimization algorithm for snn search the authors present interesting results with the differentiable nas search there are two major works related to nas for snns that has been recently out 5 11 the authors have not cited these works it makes me wonder what is the authors contributiona s compared to these works 5 talks about the fact that training snn using standard nas methods might be too complex because snns need large training time so they come up with a nas without tarining technique 11 talks about a differentiable nas technique both works show good results on a avraiety of datasets and talk about the intricacies of architecture search the authors have also compared their technique to select works in table 1 there is a lot of work from priya pandas group at yale emre neftcis group and many others with regard to snn training that show sota results on dvs and static datasets the authors have failed to acknowledge most recent works below is a list of publications not exhaustive that the author should check 1 towards spikebased machine intelligence with neuromorphic computing k roy a jaiswal p panda nature 575 7784 607617 2 enabling spikebased backpropagation for training deep neural network architectures c lee ss sarwar p panda g srinivasan k roy frontiers in neuroscience 119 3 rate coding or direct coding which one is better for accurate robust and energyefficient spiking neural networks y kim h park a moitra a bhattacharjee y venkatesha p panda icassp 20222022 4 neuromorphic data augmentation for training spiking neural networks y li y kim h park t geller p panda arxiv preprint arxiv220306145 5 neural architecture search for spiking neural networks y kim y li h park y venkatesha p panda arxiv preprint arxiv220110355 6 optimizing deeper spiking neural networks for dynamic vision sensing y kim p panda neural networks 144 686698 7 federated learning with spiking neural networks y venkatesha y kim l tassiulas p pand ieee transactions on signal processing 2021 8 beyond classification directly training spiking neural networks for semantic segmentation y kim j chough p panda arxiv preprint arxiv211007742 9 revisiting batch normalization for training lowlatency deep spiking neural networks from scratch y kim p panda frontiers in neuroscience 1638 10na byunggook et al autosnn towards energyefficient spiking neural networks arxiv preprint arxiv220112738 2022 see weakness section docsepthis work is aimed to search for both the optimal snn architecture and hyperparameters of surrogate gradient sg functions in the architecture search phase they use darts and refine the search to different granularities layerlevel and celllevel the search for sg function dgs focuses on optimizing the temperature of the dspike sg function the results show that searched architecture achieve sota performance on image classification and eventbased stereo matching task pros 1 the search for the architecture alone significantly increases the performance of image classification tasks which reveals the potential to be applied to various more complicated tasks 2 the idea of searching hyperparameter of sg function is novel simple but effective cons 1 the idea of applying nas on snns is not novel till the deadline of neurips submission snasnet1 and autosnn2 have proposed that nas methods can be used for searching the structure of snns the latter has been accepted at icml2022 2 the 
articulation of the training pipeline is not highlighted and is somewhat unclear to me see the questions below 3 the trials of search on sg functions are confined to the dspike function 1 youngeun kim et al neural architecture search for spiking neural networks arxiv preprint arxiv220110355 2022 2 byunggook na et al autosnn towards energyefficient spiking neural networks arxiv preprint arxiv220112738 2022 na docsepin this work the authors propose a differentiable hierarchical search framework for spiking neurons where spikebased computation is realized on both the cell and the layer level search space meanwhile the authors find effective snn architectures under limited computation cost in order to avoid the standard sg approach that leads the network into suboptimal solutions the authors propose a differentiable surrogate gradient search method where the sg function can be efficiently optimized locally in parallel finally this work shows some interesting results on the image classification tasks strengths 1 a hierarchical differentiable surrogate gradient search framework is proposed to obtain better performance of the spiking model 2 significant improvements in energy savings on deep stereo weakness 1 in terms of writing some methods that were not proposed in the work were placed in the methods section there are also some typos in terminology 2 the results of the ablation experiments and the analysis of some elements do not match 3 the font of the figure seems to be a small and not clear enough which leads to a very careful reading to find valuable information 4 the percentage improvement of the proposed method varies greatly on the two image classification datasets even the improvement on cifar10 is only 018 the authors illustrate the limitations of their work docsepin this submission draft the authors device a differentiable hierarchical search framework tailored for snns in the meantime this framework is able to search the surrogate gradient in a differentiable manner their methods are validated on the cifar dataset and an eventbased deep stereo dataset overall this is an interesting work the authors come up with an endtoend differentiable framework that solves two critical problems in snn the architecture and the surrogate gradient 1 developing snnoriented architectures are novel and necessary even though this work is not the first trial in the community 2 searching the sg is interesting and i am glad to see a learningbased method to address the issue 2 the results on the cifar10100 dataset are promising 1 need to include two prior snn nas papers in the discussion or experiments see references below 2 a critical problem is that there is no comparison between the searched architecture and the resnets used in other works what if the searched architecture has a higher capacity than resnets 3 an ablation study on the dgs is recommended the authors should compare static temperature gradient dgs and 31 on the same neural architecture and under the same training receipt 4 better to have an imagenet result references na b mok j park s et al autosnn towards energyefficient spiking neural networksj arxiv preprint arxiv220112738 2022 kim y li y park h et al neural architecture search for spiking neural networksj arxiv preprint arxiv220110355 2022 ### Summary:
this paper proposes a new architecture search algorithm for spiking neural networks snns the key insight is to optimize both the cell and the architecture level of the snn convincing numerical results are provided on image classification tasks cifar10 cifar100 and an eventbased stereo task one concern raised by the reviewers regards the comparison to existing work some of which appears to be very recent this point is raised by all four reviewers although it has led to a rather large variance in their initial assessments after an indepth discussion between authors and reviewers and a discussion between ac and reviewers as well it appears that this concern has been addressed in a satisfactory way other concerns eg training pipeline and versatility by reviewer cjsq have also been resolved and the remaining ones measuring energy accurately as mentioned by reviewer lhuf and computational overhead on neuromorphic hardware as mentioned by reviewer huzc have been regarded as out of scope in summary the reviewers have found the authors response convincing and have reached a consensus towards accepting the paper after my own reading of the manuscript i agree with this assessment and i am happy to recommend acceptance as a final note i would like to encourage the authors to include in the camera ready the discussions related to the feedback from the reviewers
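for readers unfamiliar with the surrogate-gradient training of spiking networks mentioned in the reviews above, the sketch below shows the general pattern of a binary spike in the forward pass and a temperature-controlled smooth derivative in the backward pass; it assumes a sigmoid-shaped surrogate rather than the dspike function actually searched in the paper, and the fixed temperature here is only a stand-in for the quantity the paper optimises per layer.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    # heaviside spike in the forward pass, smooth surrogate derivative in the backward pass
    @staticmethod
    def forward(ctx, v, temperature):
        ctx.save_for_backward(v)
        ctx.temperature = temperature
        return (v > 0).float()  # binary spikes

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        t = ctx.temperature
        sig = torch.sigmoid(t * v)
        # derivative of sigmoid(t * v) with respect to v, used in place of the
        # non-differentiable heaviside step; no gradient is returned for the temperature
        return grad_output * t * sig * (1.0 - sig), None

# usage: spikes = SurrogateSpike.apply(membrane_potential - threshold, 4.0)
```

in the differentiable surrogate gradient search described in the reviews the temperature would itself be updated during training rather than passed as a constant as in the usage line above.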
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a very preliminary first step into designing a foregroundbackground cnn that is robust to adversarial attacks the authors of the paper have not properly quantifiedwhile the intention of the paper is good this paper unfortunately does not meet the barstandard for an iclr submission and may also be reporting misleading results given the correctness of how the attacks were computed this papers main weaknesses are no use of errorbars or confident intervals in the adversarial attacks or blur based perturbation section 23 gaussian blur is not a type of adversarial attack it is an outofdistribution type of image distortionmanipulation authors should have used pgd based attacks to strengthen their claims authors should expand on using different cnnbased architectures clarity it is not obvious how the forebackground networks are partitioned into separate streams and then unified to be fully endtoend differentiable most importantly i am not convinced the results here are veridical given the way the adversarial attacks have been made it seems by figure 1 that the fusion network is not endtoend differentiable if it is not end to end differentiable then how is the gradient computed for the fgsm attack to actually maximize the loss maybe i missed something overall the idea of using parallel foregroundbackground networks is appealing for adversarial robustness but there are still some missing works i encourage the authors to look into putting visual object recognition in context zhang tseng kreiman cvpr 2020 human peripheral blur is optimal for object recognition pramod kitti arun arxiv 2020 emergent properties of foveated perceptual systems deza konkle arxiv 2021 the figures in general could all use more work while i find the idea interesting and i like the direction the authors are going this work is still quite preliminary and needs more work docsepin this paper authors studies the problem of adversarial training and tries to leverage a fusionbased method against adversarial attacks this method fuses features from foreground and background extracted by pretrained models and test its performance against both gaussian blur and gradientbased attacks the authors claim three main explorations exploring the effects of adversarial attacks on both context and object feature space exploring the benefits of fusing different modalities against adversarial attacks exploring the benefits of context features strengths robustness to adversarial examples is a hot topic within the ml community however relatively less attention has been spent on the explorations of fusion based models against adversarial attacks therefore i believe that the main focus of this paper is very relevant to the iclr community weaknesses discussion questions the paper exceeds the page limitation which is not fair for other submitted manuscripts within the page limitation the integration of fusion and adversarial learning should be a very interesting topic to be studied with the contribution of this paper seems to be making explorations within this domain then the authors should point it out explicitly instead of saying summary of our approach in section 14 and the paper is not making clear what contributions are novel and what is from existing work i think it would be better to separate a related work section from the introduction and describe more prior work of making fusion networks against adversarial 
robustness eg 1 and the differences between your method and other methods for clarity then in the methodology section authors describe how to leverage and fuse pretrained models and test its performance against adversarial attacks the authors are suggested to highlight your proposed method otherwise it is more like a technical report with lack of novelty in my opinion the contribution of this work is not enough authors are suggested to gather more prior work redesign the experimental settings compare with other related methods and demonstrate its performance against stronger attacks eg pgd cw and aa the paper seems to be written in a rush there are several format errors typos grammatical errors and sentences that fail to convey ideas the writing of the paper needs polishing all the citations are mixed with main text making the paper not easy to follow figures are blurred for examples the text in figure 4 5 is not clear and being stretched 1 yu et al towards robust training of multisensor data fusion network against adversarial examples in semantic segmentation ieee icassp 2021 this paper studies the problem of adversarial training and tries to leverage a fusionbased method against adversarial attacks the integration of fusion and adversarial learning should be a very interesting topic to be studied with however i have concerns about the technical quality the novelty of the manuscript and the violation in page limitation all of these lead me to recommend its rejection docsepthe paper tackles the adversarial example problem the authors propose an approach that is motivated by the way biological systems employ multimodal information to recognize category of objects specifically the approach combines two pretrain models that are excepted to focus on foreground and background respectively then the foreground module is finetuned for downstream tasks while the background module is left unchanged the authors demonstrate that they obtain better performance against blur and fgsm there are three major weaknesses in this paper 1 the pretrained models selected for recognizing foreground and background are not convincing there is no proof that the one trained on imagenet can be used as a foreground objects detector if you ever checked the detailed class labels of imagenet you will know there are many classes that are similar to place365 viceversa 2 the novelty is limited the method can be seen as an ensemble of different models moreover the fgsm is only targeting the foreground module leaving the background module untouched 3 the experiments are weak including the selection of datasets and attack methods due to the weaknesses mentioned i recommend to reject this paper docsepthis work proposed to enhance the robustness of dnns by fusing context information from the background it first studied the blur effects to the foreground and backgroundbased dnns and observed that fusing the two information helps accuracy improvements under different blur effects then it further extends to the adversarial attacks via fgsm and observes the advantages of using background information on mscoco and cifar10 datasets finally it proposed a regularization method to reweigh the foregroundrelated weights during training i have the following concerns 1 the idea for enhancing the adversarial robustness via foreground and background is not novel and has been studied in a a has similar conclusions but with a more challenging attack eg pgd instead of the fgsm 2 it is not clear why choosing mscoco dataset as a subject dataset 
commonly used datasets for adversarial attacks are the imagenet and cifar datasets why not use imagenet dataset 3 why choose gassuain blur as a perturbation note that recent work b has studied the adversarial attack from the angle of motion blur in contrast to gaussian blur the adversarial motion blur could fool dnns via gradient information like the traditional noise attack 4 all figures show obvious distortions and there are a lot of typos this work may be a rush to the deadline a towards robustness against unsuspicious adversarial examples b watch out motion is blurring the vision of your deep neural networks neurips 2020 overall the main concerns of this work are the novelty and unclear experimental setups ### Summary:
this manuscript proposes an information fusion approach to improve adversarial robustness reviewers agree that the problem studied is timely and the approach is interesting however they note concerns about the novelty compared to closely related work the quality of the presentation and the strength of the evaluated attacks compared to the state of the art among other concerns there is no rebuttal
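for context on the attacks debated in the reviews above, the sketch below gives standard fgsm and pgd formulations under the usual l-infinity constraint; model and loss_fn are placeholders, and the epsilon and step values are common defaults rather than the settings used in the paper. note that, as one reviewer points out, both attacks require the full fusion pipeline to be differentiable end to end so that gradients with respect to the input image exist.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=8 / 255):
    # single-step attack: move each pixel by eps in the direction of the loss gradient
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # iterative attack: repeated signed-gradient steps projected back into the eps-ball
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```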
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper formulates the disentanglement problem from an information theoretic perspective but focusing on an objective that encourages a compositional disentangled feature space on the layers that precede the final latents with objective the authors describe a new method using gate of mixtureofexperts to implement the compositional disentangled reconstruction objective some of the terms require mutual information estimation for which they use mine estimators they run experiments across dsprites and 3dshapes and look into reconstruction error and different disentanglement metrics observing that they method outperform existing betavaelike baselines without any compositional incentives they also analyse the loss components with different architectures and observe that degrees of compositionally in the architecture yields better disentanglement finally they look into some ablations of the regularisation pressure and into data efficiency in downstream tasks overall i am pretty happy with the paper its mostly well written and organised 1 positives session 2 compositional disentanglement learning is well organised and sets up the scene well for the method in session 3 good level of implementation detail is available such as architectures estimator used etc the experiments are well conducted and common mistakes were avoided afaict use of standard datasets and metrics well stablished in the field 2 for improvement there has been some progress in the use of hierarchical vaes which can be interpreted as applying disentanglement regularisation to other layers and making it compositional in a similar fashion to this work eg nvae a vahdat j kautz 2020 i would be a bit more careful with the tone on claims about the requirement of compositionality for disentanglement figure 1 is only an evidence in a toy example not an actual demonstration so statements as fig 1 shows when is not effectively disentangled session 1 and to achieve better disentanglement between their input feature sets are expected to be disentangled as demonstrated in our case study session 22 could be watered down a little its unclear if there are benefits from using mine and the architecture in some of the experiments for example if i understood correctly the betavae objective yields better metrics on figure 6 than in figure 3 the paper left me wondering what are the disentanglement metrics on the preceding layers looking into migsapdcid on ml1 and ml2 seems like a straightforward analysis that should be in the appendix or even in the main paper some ablations on the architecture itself also seem to be missing ablations on the loss are good my intuition is that gate of mixtureofexperts fits quite well with the disentanglement that we want because of the top for example just learning a linearsoftmax instead of the router no routing at all just some fixed assignment eg split ml in d1 equal slices and pass through the encoders using a transformer instead of the recursive modules perhaps over fixed slices as well my main concern is the discussion of related hierarchical models missing from related work and the emphasis on this being the only work to apply some disentanglement pressure outside the main z latents this should be an easy fix for this paper the compositional objective is interesting and novel and the implementation method is clean the experiments were well conducted and the well analysed overall i am confident that 
the authors will be able to address the main issue above and that this paper will award acceptance in this venue docsepthe paper proposes a new approach for learning disentangled variational autoencoders in addition to pushing the sufficiency minimal sufficiency and disentanglement of the latent representation the paper proposes to also regularize those on earlier features in the network experiments demonstrate promising results overall the idea is technically sound and the results look promising i have some questions and suggestions and hope the authors could clarify them during the rebuttal section 21 what is the meaning of defining markov chain as hatx x z given that the generative process is actually x z hatx i looked at the cited work achille soatto 2018 and they seem to discuss a different setting where they have a dataset of data points x and associated labels y and in that case y x z makes sense to me definition 5 similarly what is the practical meaning of mjl1 x milmjl figure 6 left do you use compositional objective section 22 and recursive disentanglement network section 3 in this experiment if so lambda20 is not equivalent to betavae as the vanilla betavae does not have the compositional objective figure 2 the wjl in green should be wdl1l i would suggest adding ganbased approaches into the comparison tables this would be very helpful for readers who want to pick techniques for their downstream applications the decomposition and discussion of the losses around eq 2 and table 1 have been partially discussed in prior work eg httpsarxivorgabs170602262 and i believe it is not your key contribution i would suggest highlighting this fact better as it sounds like these are your discoveries from the current writing in summary the idea is interesting and the results look promising i hope the readers could clarify these questions and i will adjust the score accordingly docsepthe paper presents a vae variant that disentangles the features of the inference network at every layer with disentanglement defined in terms of mutual information between features the approach is implemented as recursive disentanglement network based on a switch network aka mixtureofexperts gate introduced in shazeer 2017 and used in the switching transformers the results in dsprites and 3dshapes dataset suggest this variant performs better than wellknown disentanglement vae networks from few years back in dsprites and well in 3dshapes though not the best in all measures in terms of vae loss the approach is presented as a generalization of various other disentangling vaes strengths the paper presents an approach that is theoretically justified and implemented in a convincing manner the results appear convincing and beating relevant baselines though with the disclaimer that mostly these baselines seem rather old by now some of the relevant comparisons from the last 2 years might be missing but i cannot name any empirical evaluations are sufficient though only barely except for the ablation weaknesses i suspect that including layerwise disentanglement has occurred to many people before but it has not been attempted due to the computational burden that said i have not seen anyone actually try it it is unclear whether this approach solves the computational burden and scales to more complex datasets and larger image resolution i am not convinced that the loss formulation eq 4 is that significant wrt infovae for example what would the authors say against the criticism that eq 4 is basically just reshuffling the infovae 
loss im not saying it is but it would be helpful if the authors could shoot down this potential concern it is unclear to what extent does the performance originate from the loss equation 4 and to what extent does it come from the switchbased architecture could the authors clarify this it would seem that one could implement eq 4 loss without the switch architecture the paper appears to simultaneously give a clear slightly novel generalization to disentanglement losses of prior vae variants and to provide an architectural approach to implement their approach i have concerns about the scalability and i am suspicious about whether such a heavyhanded disentanglement can be maintained in larger models it also was not clear to me whether the results originate from the loss or the architecture advances which build on existing switching architectures however either way i think this is a novel approach and potentially a significant addition to the vae disentanglement research and i lean towards acceptance in this case though i urge the authors to address my questions docsepthis paper proposed a recursive disentanglement network recurd for the learning of disentangled representations from information theoretic perspective the experimental results show recurd outperforms some existing baselines on two benchmark datasets pros 1 developed a compositional disentanglement learning called recurd that directs the disentanglement learning process across the compositional feature space 2 provide some theoretical analysis based on information theory cons 1 optimizing the lower bound of eq2 does not mean obtaining the optimal objective function of betatcvae paper 1 on the righthand side of eq 2 as far as i know the righthand side of eq 2 is the objective of betatcvae if we optimize the objective function on the lefthand side it does not hold for optimizing betatcvae thus i am afraid that the proposed objective function in eq 1 fails to be generalized to the existing betatcvae and factorvae in contrast optimizing the objective of betatcvae is approximately equivalent to optimizing the proposed objective function in this paper what is more in table 1 lambdac 1 for the original betatcvae in their paper 2 do not specify the number of gate of encoders goe for different datasets it is hard to know how many goe should be used for a new dataset like celbea that contains 40 latent variables also i am curious about the complexity of the proposed network 3 the upper bound and lower bound are confounding on page 9 the author said mutual information ixz is the upper bound of kl divergence in fact based on the proof in prior work 2 ixz is the lower bound of kl divergence 4 the markov chain in eq 7 is incorrect based on my understanding the next state xt1 of markov chain is only related to the current state xt hence the joint probability of pabc papbapca rather than papbapcb this is because c is conditionally independent on b as you mentioned bac 5 missing baselines in recent years the author only discussed and compare the proposed method with the baselines before the year of 2019 there are some recent works 3 4 5 on improving the disentanglement and reconstruction error for instance controlvae 5 dynamically tunes the weight beta on the kl term to achieve a good tradeoff between disentanglement and reconstruction error 6 did not conduct experiments on complex datasets the authors should do experiments on 3d chairs or celeba to demonstrate the good performance of the proposed method 7 the result in fig 5 does not perform well 
we can observe that for orientation and scale factors they are slightly entangled besides the reconstruction quality is not as good as controlvae and factorvae in the paper in particular controlvae and factorvae can disentangle both 5 latent factors which are better than those in this work 8 there are some typos in this paper please proofread this manuscript for instance priopr work prior work on page 5 reference 1 chen r t li x grosse r duvenaud d 2018 isolating sources of disentanglement in variational autoencoders arxiv preprint arxiv180204942 2 xue bin peng angjoo kanazawa sam toyer pieter abbeel sergey levine variational discriminator bottleneck improving imitation learning inverse rl and gans by constraining information flow iclr 2019 3 patrick esser johannes haux bj rn ommer unsupervised robust disentangling of latent characteristics for image synthesis iccv 2019 4 srivastava akash yamini bansal yukun ding cole hurwitz kai xu bernhard egger prasanna sattigeri josh tenenbaum david d cox and dan gutfreund improving the reconstruction of disentangled representation learners via multistage modelling arxiv preprint arxiv201013187 2020 5 shao h yao s sun d zhang a liu s liu d abdelzaher t 2020 november controlvae controllable variational autoencoder in international conference on machine learning pp 86558664 pmlr i think the authors have addressed most of my concerns i will increase the final rate ### Summary:
this paper proposes an algorithm for achieving disentangled representations by encouraging low mutual information between features at each layer rather than only at the encoder output and proposes a neural architecture for learning it empirically the proposed method achieves good disentanglement metrics and likelihood and reconstruction error in comparison to prior methods the reviewers think that the methodology is natural and novel to their knowledge and are happy with the detailed execution the authors are encouraged to improve the presentation of the paper by providing a rigorous formulation of the markov chains to avoid confusion a justification of the independence assumptions behind them and more indepth discussions of the learning objectives
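as a reference point for the loss discussion above, a plain beta-vae objective looks like the sketch below; the paper's compositional objective adds mutual-information terms on intermediate layers (estimated with mine in the paper), which are omitted here, and the gaussian-decoder reconstruction term is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # reconstruction term plus beta-weighted kl divergence to a unit gaussian prior
    batch = x.shape[0]
    recon = F.mse_loss(x_recon, x, reduction="sum") / batch
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / batch
    return recon + beta * kl
```

with beta equal to 1 this reduces to the standard vae objective, which is why several reviewers treat the baselines in the paper as special cases of a single family of regularised objectives.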
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper considers the exploration efficiency issues in offpolicy deep reinforcement learning drl the authors identify a sample efficiency limitation in the classical entropy regularization which does not take into account the existing samples in the replay buffer to avoid repeated sampling of previously seen scenariosactions the authors propose to replace the current policy in the entropy term with a mixture of the empirical policy estimation from the replay buffer and the current policy and term this approach as sampleaware entropy regularization the authors then propose a theoretical algorithm called sampleaware entropy regularized policy iteration which is a generalization of the soft policy iteration spi algorithm and show that it converges assuming that the empirical policy estimation is fixed a practical algorithm based on the sampleaware entropy regularized policy iteration called diversity actorcritic dac is then proposed this algorithm is a generalization of the wellknown soft actorcritic sac algorithm finally numerical experiments show that dac outperforms sac and other sota rl algorithms and some ablation studies are also provided to demonstrate the effect of hyperparameter choices in dac in general the approach is novel to my knowledge and the high level idea of using mixed policies in the entropy regularization to avoid repeated sampling and encourage unseen scenariosactions is also interesting and reasonable however there are some clarity and technical issues that should be addressed and improved as listed below 1 the authors study finite horizon mdps for which the optimal policy should be nonstationary in general however the authors only consider stationary policies instead the authors should either change the underlying setting to infinite horizon mdps or consider nonstationary policies 2 in 2 st should be replaced by an arbitrary s in the state space otherwise there may be contradicting definitions of the policy q if st and st are equal for some two different timestamps t and t and in 3 it is better to write the qrm targetpialpha in the entropy term as qrm targetpialphacdotst to be consistent with 1 3 its not very clear why the authors propose to estimate rpialpha with some neural network parametrized ralpha the authors mention that one can only estimate rpirm oldalpha for the previous policy pirm old in practice however since in rpialpha all the quantities including pi q and alpha are known im confused why one cannot evaluate it directly on a related point its not very clear why the estimation procedure for eta the parameter of ralpha using hat jralphaeta makes sense the form of hat jralphaeta looks like an entropy term extracted from the jpirm old function but its unclear why maximizing it gives a good estimation of rpialpha some more explanations are needed 4 there seem to be several errors at least inaccuracies in the proof of theorem 1 in the appendix firstly in the proof of lemma 1 the term correctly estimates is not very accurate and should be simply stated as something like equals also its not very clear when the assumption ralphain01 can be guaranteed eg using gaussiansoftmax policies secondly in the main proof of theorem 1 convergence of qpii to some qstar is correct but this does not immediately imply convergence of jpii let alone the convergence of pii to some policy pistar on a related point the proof for the optimality of pistar in terms of j 
is not clear in particular it is not clear why 7 and lemma 2 implies the chained inequality jpirm newpirm newgeq jpirm oldpirm newgeq jpirm oldpirm old i understand that the authors may feel that the proofs are similar to that of spi but indeed there are several significant differences eg the definitions of pirm new and jpi more rigorous proofs are needed for these claims 5 in section 5 it is unclear why the authors need to include the parameter c how to choose it and what it serves for some additional explanations are needed 6 on a high level the eventual goal of the paper is not clearly stated from the experiments it seems that the average episode reward is the actual goal of concern however the problem setting and the theoretical results theorem 1 seem to indicate that the problem of concern is the discounted entropy regularized reward some discussion about this is needed finally here are some more minor comments and suggestions 1 in the analysis of the sampleaware entropy regularized policy iteration the authors assume that q is fixed however in practice especially in the long run as concerned in the analysis such an assumption will not hold even in just an approximate sense can you still obtain some sort of convergence when taking into account the q changes 2 why do you need to divide the reward and entropy regularization term in qpi by beta 3 its better to write out the binary entropy function h explicitly for clarity 4 at the beginning of section 43 propoed should be proposed and in section 5 a function st should be a function of st 5 some high level explanations on why the 1alpha term can also be dropped in 8 will be helpful 6 the theoretical results only show that the algorithm converges which is already guaranteed by spi is there any possibility to show that there is also some theoretical improvement so in short the paper proposes an interesting modification of the maxentropy regularization framework but contains several technical and clarity issues hence i think it is not yet ready for publication in its current form docsepthis paper proposes diversity actorcritic dac for exploration in reinforcement learning the main idea of the proposed algorithm is to take advantage of the previous sample distribution from the replay buffer for sampleefficient exploration the authors provide convergence analysis of dac and conduct empirical investigations on several benchmarks pros the idea of using previous sample distribution from the replay buffer for better exploration seems interesting the proposed exploration bonus mathcalhqpi alphatexttarget can be decomposed into three terms as shown in 4 since the last term does not depend on pi intuitively this exploration bonus encourages the exploration of pi first term and tries to make pi different with previous policies approximated by the replay buffer second term the authors provide a reasonable method to optimized the proposed objective which can be naturally combined with stateoftheart algorithms like sac cons 1 theorem 1 seems misleading the diverse policy iteration can only guarantee the converge to the optimal policy with respect to the regularized value function not the optimal policy of the original problem the authors should make the definition of pi clear 2 its hard to see the motivation of using a mixture of q and pi could you explain more about this choice 3 its worth to provide the results of sacdiv with js divergence as its more similar to the proposed objective 4 4 the experiment results are not convincing enough as some 
important baselines are missing for example 1 also uses a mixture of previous polices to encourage exploration with strong theoretical guarantees i believe this is closely related to the proposed algorithms also the experiment results are not very promising compared with the baseline algorithms based on sac 1 hazan e kakade s singh k and van soest a 2019 may provably efficient maximum entropy exploration in international conference on machine learning pp 26812691 other suggestions the main idea of the proposed method is to make the current policy different with previous policies the paper uses a nonparametric method 2 to approximate the previous policies i think its also worth to try parametric q for example q could be learned by fitting the replay buffer or use a moving average of previous policies docsepsummary this paper proposes a novel exploration method in offpolicy learning compared to previous methods which do not take care into account the distribution of the samples in the replay buffer the proposed method maximizes the entropy of the mixture of the policy distribution and the distribution of the samples in the replay buffer hereby making exploration efficient reasons for score i vote for accepting the paper the paper proposes an intuitive and efficient exploration method that generalizes existing methods including them as special cases the authors provide a theoretical guarantee theorem 1 that the policy obtained from the iteration of evaluation and improvement under this new regime converges to the optimal policy the presentation is clear and concrete and the experiments are convincing pros the experiment results are not limited to just showing that the proposed method achieves higher reward than state of the art methods but they also address important questions such as i the pure exploration when rewards are assumed to be 0 i the necessity of the adaptation of alpha the parameter that controls the ratio of the current policy to the sample distribution in the target distribution ii the effect of controlling alpha the entropy weighting factor beta and the control coefficient c required for adapting alpha and also the robustness of the proposed method to these parameters the authors have stated the experiment details clearly and the results are convincing cons the methodology part in section 3 and 4 could be improved some notations are confusing a in section 3 the policy pi is defined as a function from s to a it looks like it is a fixed function over time b an explanation on the definition of jpi 1pi 2 would be helpfuleg jpi 1pi 2 is value of jpi2 computed under pi1 minor comments it would be good to add the line of sac and sacdiv in figure 5 c to show that the performance of dac with adaptive alpha is robust to control coefficient c for now one has to go back to figure 4 b to check that most of the case when c is not 0 dac with adaptive alpha performs better than sac and sacdiv in section 6 in the 5th line jpi should be specified as jpi in 1 it is done in the next sentence but i prefer that it is done when it first appears it was confusing docsep summary the paper proposes dac an actorcritic method exploiting the replay buffer to do policy entropy regularisation the main idea of dac is to use the data from the replay buffer to induce a distribution qcdot st and replace the entropy part of the soft actorcritic objective with a convex combination of q and pi this results positively on exploration properties and leads to sampleefficiency gains on some of the considered mujoco 
benchmarks pros formulating the diversity using the entropy of the replay buffer frequences is an interesting idea using the convex combination of q and pi for entropy regularisation is a nice way of generalising sac for the considered purpose the paper shows the convergence of their method to an optimal policy and derives a surrogate objective whose gradient direction coincides with the original one but which can be practically used however i have not checked the proofs which are in the appendix cons it is not clear what is the problem the paper tackles is it exploration is it a generic rl setup what kind of problems is dac good for if dac is for improving exploration then it should be compared with other exploration methods not with vanilla sac comparison with rnd should not be in the appendix and there should be more details on this related work in this case should have a paragraph on exploration methods in rl the paper is based on assumptions not challengedtested by the authors eg policy entropy regularisation is inefficient because it does not take the distribution of the samples into account the paper focuses more on the technical details of the solution rather than justifying the assumptions and making the research question clear reasoning behind the score i believe the paper has a great potential however at the moment i vote for rejection the paper has to have a clear research question and its motivation this should define the experimental part of the work lack of a clear positioning makes it unclear if the baselines of the experimental sections are the right ones and whether the claims have been properly supported by the results questions to the authors can you formulate the exact problem you are solving how can you justify the claim that entropy regularization is sample inefficient in offpolicy learning since it does not take the distribution of previous samples stored in the replay buffer into account it is preferable that the old sample distribution in the replay buffer is uniformly distributed why is it true doesnt prioritized experience replay refute this claim you define beta in equation 1 in 0 infty can it really be infinite the rationale behind this is that it is preferable to have as diverse actions stored in the replay buffer as possible for better q estimation in offpolicy learning what are the assumptions for this do you care more about better q estimates or finding an better policy faster how can you support your rationale in section 41 you define the target distribution as a convex combination of pi and q you assume that the buffer is generated by q does such a policy always exist what are the assumptions for this you prove the convergence of your algorithm i did not check the proof in the appendix what are the assumptions for which the convergence is guaranteed why do you use sparsedelayed mujoco benchmarks but not the original ones the variance across different seeds seems to be huge for your method as well as for the others what do you think is the reason behind this this also happens for the pure exploration task in 61 why do you think it happens for the adaptive alpha case you restrict the range of possible values what is the reasoning behind the left boundary i think your paper can find an important application in imitation learning or offline rl have you considered this are you aware of works which do something similar in those subfields additional feedback not affecting the score reinforcement learning aims to maximize the discounted sum of rewards should be 
expected discounted sum there should be a distribution over initial states under the expectation sign in 31 a is the continuous action space this is not true for the general mdp definition specify that this is specific for your paper section 31 a policy is a mapping from states to distribution over actions not to actions in offpolicy we can learn from any other samples not only from previous samples from our policy typo propoed at the bottom of page 4 equation 9 does not have a left hand side dac acronym has been used in rl i would choose a different one to avoid confusion ### Summary:
first id like to thank both the authors and the reviewers for extensive and constructive discussion the paper proposes a generalization of sac which considers the entropy of both the current policy and the action samples in the replay pool the method is motivated by better sample complexity as it avoids retaking actions that already appear in the pool the paper formulates a theoretical algorithm and proves its convergence as well as a practical algorithm that is compared to sac and sacdiv in continuous sparsereward tasks generally the reviewers found the method interesting after rounds of discussion and revisions the reviewers identified two remaining issues the theoretical analysis still requires improvement and the positioning of the paper is not clear particularly the method is motivated as an exploration method and it should be evaluated as such for example by comparing to a more representative set of baseline methods therefore im recommending rejection but encourage the authors to improve the work based on the reviews and submit to a future conference
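to make the sample-aware entropy idea discussed above concrete, the sketch below computes the entropy of a mixture of the current policy and an empirical action distribution estimated from the replay buffer for a discrete action set; the paper works with continuous actions and a learned estimator, and the convention that alpha weights the current policy is an assumption of this sketch.

```python
import torch

def sample_aware_entropy(pi_probs, q_probs, alpha=0.5):
    # pi_probs: (batch, n_actions) action probabilities of the current policy
    # q_probs:  (batch, n_actions) empirical action distribution from the replay buffer
    mix = alpha * pi_probs + (1.0 - alpha) * q_probs
    return -(mix * torch.log(mix + 1e-8)).sum(dim=-1)

# with alpha = 1 this reduces to the usual policy entropy used by sac; smaller alpha
# lets the buffer distribution contribute, pushing the policy toward actions that are
# still under-represented among the stored samples
```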
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper introduces a sensorfusion approach that provides interpretable intermediate representations of the world scene the approach can fuse multiview rgb images along with lidar scans the network architecture involves a cnn backbone which feeds into a transformer encoder for fusion a transformer decoder generates an object density map a set of waypoints for the ego and a set of rules to be enforced such as traffic lights which are then fused in a safety controller to generate openloop actions strengths 1 i like the idea of training the perception pipeline with the planning and control modules in tow this would allow the perception pipeline to extract the features that are relevant to behavior generation and control 2 the approach provides interpretable perception outputs which would be a great asset for verification of the correctness of the decisions made by the downstream planning and control modules modulo the correctness of the perception outputs 3 the experimental results are strong first rank in carla leaderboard in driving score and thorough extensive ablation studies that do give reasonable insights such as the importance of fusion weaknesses 1 in a simulator the data collection for training was easy because direct access to the scene ground truth is available however perfectly annotating the object density map for training from real driving logs will be very challenging 2 the planning and control modules in an actual av might be significantly more complex than the safety controller used in this paper with multiple layers and potentially nondifferentiable components 3 the paper suffers from some grammatical errors which can be fixed clarification question 1 in the context of control line 109 says rulebased methods hardly scale to complex environments due to the extensive human labor required what human labor is being discussed here also could you cite a reference which suggests rulebased methods fail to scale this paper helou bassam et al the reasonable crowd towards evidencebased and interpretable models of driving behavior iros 2021 seems to suggest otherwise 2 why does picking the local maximum for object probability in the map help with identifying objects with high position uncertainty as suggested in line 198 docsepthe authors present an interpretable autonomous vehicle av policy which features a sensor fusion transformer the authors develop a transformer encoder which takes in which uses multiple camera viewpoints and lidar additionally a transformer decoder which output waypoints an object density map and traffic rules in order to determine a nominal trajectory and use a safety controller to adjust the velocity of the planned trajectory the resulting av policy beats state of the art methods in carla leaderboard and benchmarks strengths justification for methods are well argued and the paper is accessible the results on the carla leaderboard and benchmarks are impressive and show the potential of the method for more realisticlarger scale settings the inclusion of code for the method is a great strength and i hope it is made public upon publication this will allow follow up work to more easily compare to this strong method and in a straightforward way and extend portions of the method such as with different safety controllers as the authors mention on l210l11 weaknesses the addition of an ablation study is important in understanding the importance of the 
choices the authors made in the resulting method however as it stands the interpretation of the ablation study and tables 2 and 3 are quite confusing and i am not sure how to view the results the authors seem to have a different interpretation of the table than i do but this is possibly due to a mistake in the table the overall approach is very handcrafted and there are many possibly nonobvious choices which must be made eg l554l555 cyclists and pedestrians bounding boxes are scaled up but not vehicles i believe this is common for largescale learning for avs but this does somewhat weaken the approach it is possible that the good performance is due to these many choices and the careful cost function shaping instead of the transformer and interpretable outputs docsepthis paper proposed a transformer fusion architecture for controlling autonomous driving agents various image and lidar inputs are processed by cnns and fused by a transformer encoder which is followed by a transformer decoder to output driving action and auxiliary outputs evaluation results on various benchmarks including the public carla leaderboard demonstrate the effectiveness of the proposal strengths 1 this paper covers an important topic 2 the proposal achieves impressive performance 3 the writing is generally clear and the diagrams are helpful for understanding the architecture weaknesses my main concern about this paper is the lack of more careful analysis of safety and interpretability the two main benefits claimed by this paper these two concepts are very related in this paper as safety is ensured by the interpretable intermediate outputs generated by the model such as inferred traffic state information however since these outputs are generated by equally noninterpretable black box model actually the same transformer decoder so this set up seems more like an auxilliary loss setup rather than providing any kind of interpretability or safety guarantee for safety it is mainly tackled by verification as already discussed in the related work or barrier functions eg 1 however the safety notion strongly depends on the quality of the intermediate output prediction which seems hard to offer any guarantee on the interpretability side i wouldnt call such an architecture interpretable specifically it is not clear what additional benefits the explanation ie intermediate outputs offer in terms of understanding the model because the explanation is generated along side the action predictions they do not need to be coupled in anyway so that for example the action could be the drive forward prediction even if a red traffic light is also predicted for claims on interpretability i would like to see some concrete evidence such as helping with model debugging 2 3 improving humanmodel collaboration 4 or some other use cases 5 at the very least some more careful analysis of the intermediate output is needed to understand when they can help and when they cannot then the lack of rigorous studies of safety and interpretability could be acknowledged in the limitation section 1 httpsarxivorgabs210906689 2 httpsarxivorgabs210414403 3 httpsopenreviewnetforumidxnovfccvdpm 4 httpsarxivorgabs200614779 5 httpsdlacmorgdoi1011453511299 ### Summary:
this paper proposes a new sensorfusion approach that provides interpretable intermediate representations of the world scene and a safetyenhanced feature for autonomous driving the authors propose to fuse multiview rgb images along with lidar scans the feature extraction part is also enhanced with the planning and control modules the reported experiment results are promising and strong ie first rank in carla leaderboard in driving score and accompanied by extensive ablation studies the justification for the method of fusing multiview rgb images and lidar scans is well articulated the authors have greatly clarified open questions from the reviewers regarding safety and interpretability these additional details would be very helpful to understand the paper and its impact the authors should consider adding them to the final version or its appendix
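for readers who want a concrete picture of the transformer-based fusion described above, the sketch below fuses per-sensor cnn feature tokens with a standard transformer encoder; the token counts, learned sensor embeddings, and layer sizes are illustrative assumptions, and the paper's full architecture additionally includes a transformer decoder producing waypoints, an object density map, and traffic-rule outputs, followed by a safety controller.

```python
import torch
import torch.nn as nn

class SensorFusionEncoder(nn.Module):
    # toy sketch: per-sensor cnn features are treated as token sequences, tagged with a
    # learned per-sensor embedding, concatenated, and fused by a transformer encoder
    def __init__(self, feat_dim=256, n_sensors=5, n_layers=4):
        super().__init__()
        self.sensor_embed = nn.Parameter(torch.zeros(n_sensors, 1, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, sensor_feats):
        # sensor_feats: list of tensors (batch, n_tokens, feat_dim), one per sensor
        tokens = [f + self.sensor_embed[i] for i, f in enumerate(sensor_feats)]
        return self.encoder(torch.cat(tokens, dim=1))
```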
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents hcm an approach for chunking a sequence of data into a hierarchical representation more specifically hcm learns a tree with atomic units ie the lowlevel inputs in this case integers representing things like text characters or quantized pixel values as the leaves and increasingly complex groupings of them higher up the tree hcm learns by iteratively parsing the provided data ie stream of tokens in each pass computing marginals for the current set of chunks as well as transition frequencies between them after updating its marginals and transition frequencies the two chunks with highest joint probability are combined into one the process continues until all pairs of chunks pass an independence test i believe the main contribution of this paper is in that it presents an idea for interpretable grouping based on the principle of grouping by proximity from cognitive science and a largely qualitative proof of concept for it strengths i believe the papers main strength lies in its motivation i believe the core of the presented idea is compelling and would be of interest to the community the paper is clearly written and the method is simple weaknesses the paper presents primarily qualitative results for the majority of datasetstasks used the experiment performed on a text corpora only presents a table of examples with learned chunks and the visualtemporal experiment only presents a figure with some of the learned visual chunks it is not clear to me from the presented experiments how to compare this method to alternatives there is one experiment comparing against an rnn baseline showing that hcm converges faster however rnns are not the current sota in sequence modeling ie why wasnt a transformer model used i am concerned that the method as currently defined cannot generalize to real world data hcm parses chunks from a sequence by matching them exactly to subsequences which to me means that this method groups segments together purely based on form rather than semantics my perspective is that the promise of hierarchical representations is that you can decompose complex objects and patterns into their parts eg a person into head arms legs head into eyes ear nose etc however in modalities such as vision the same parts can appear with drastically different color values the paper alludes to this in its discussion section but does not present a solution to this problem which is something that i think would need to be shown related to my point above im not entirely sure i understood the thesis of the paper in terms of the narrative it is trying to convey and would appreciate hearing the authors thoughts on this is this meant to be received as a paper for the cognitive science community showing an operationalization of grouping by proximity or is it being presented for the machine learning community as a representation learning method for use in downstream tasks if its the former i believe this work would be much more appropriately submitted at a cognitive science conference if its the latter i believe much more empirical evidence of the learned representations usage needs to be shown the related work section mainly focuses on historical nlp methods with little discussion over similar methods in computer vision which i believe is needed given that it presents experiments on visual data i would suggest works such as normalized cuts and image segmentation by shi and malik 2000 
selective search for object recognition by uijlings et al 2012 as places to start additionally i think work on unsupervised grammar induction could also be relevant here although i believe the motivating idea is very compelling i dont believe this paper is ready for publication in summary i believe the paper currently lacks more thorough empirical evaluations comparing against other methods experiments showing the methods potential for generalizing to more naturalistic data as well as its usefulness for downstream tasks a more clearly focused narrative motivating why its appropriate for a venue like iclr as opposed to a cognitive science publication as well as more thorough contextualization among related work particularly comparing against recent alternative methods for this problem i thank the authors in advance for their response and am also interested in seeing other reviewers thoughts docsepthe paper proposes a graphlearning model hcm for learning hierarchical chunks from sequential data the paper first proposes an idealised hcm method for which the paper provides learning guarantees via a proof by induction and an online approximation to this idealised method which is more computationally feasible and which is used to perform experiments in temporal visual visuotemporal and language sequential data domains the paper demonstrates that the online method learns interpretable chunks at multiple levels of abstraction and demonstrates positive and negative transfer to other hierarchically structured environments with similar and different structures strengths the paper is very well written with very clear intuitive explanations for how their method works and justifications for the authors design choices the paper provides several wellconsidered experiments to demonstrate the hcm method quantitatively and qualitatively first purely sequential data is generated from several random but known heirarchicallystructured graphs and the hcm method is shown to learn this underlying hierarchical structure well compared to a vanilla rnn secondly the paper verifies that the learned model shows positive and negative transfer to similarly or differently structured heirarchical environments as might be expected from a chunk learning algorithm fianlly the paper explores how the hcm model performs qualitatively in spatial spatiotemporal or englishlanguage chunking with interpretable although unquantified results in each the connections to animal chunk learning are well thought through interestingly for the case of spatiotemporal chunking without considering a priori the spatial proximity of pixels spatially connected chunks are learned so it is by virtue of the fact that objects tend to move smoothly in space and time that online hcm will learn to group visual spatial chunks smoothly in the height x width plane too this has really interesting close ties to theories for animal learning of object permanence although obviously the implementation is very different as the authors note weaknesses the paper mentions that this method should offer more interpretable learned representations but for what sort of task or application is this envisaged regarding the transfer of learned chunks to new data sequences it seems that a human or other model would have to know the underlying generative process of the target data sequence in order to know whether the original learned chunking model should work well in the new setting or not unless of course the data is generated from the exact same process as the training data if 
a human or other model knows that then is it not true that you dont need the model to do the chunking in the first place it would have been nice to see quantitative demonstrations of performance for the spatial spatiotemporal and languagechunk learning experiments i appreciate its not immediately obvious what the right metric for this performance would be at least to me but if the authors were willing and able to find an appropriate one and use this to compare their method to other chunklearning algorithms it would definitely strengthen the paper in the learning plots vs the vanilla rnn the paper would also have benefitted from comparisons to other explicit chunklearning algorithms a wellwritten description of a method for a chunklearning algorithm with learning guarantees and qualitative demonstrations of sensiblelooking chunks across a variety of domains quantification of results was a bit lacking docsepthis paper proposes a method for learning representations of non iid data in terms of hierarchical sets of chunks inspired by cognitive theories of grouping by proximity these sets are assembled over time from the initial set of primitive data points by finding correlations between temporallyspatially sequential primitiveschunks and appending to the set the authors show that this learning method is tractable has convergence wrt hierarchicallydecomposable problems and learns intuitively and practically reasonable chunk sets strengths this paper is particularly wellwritten and understandable i appreciated the intuitive explanations of chunking in cognitive science and its extension to common machine learning use cases like language and visual data the examples of instances where hierarchical chunk learning could both help and hurt a learned model were wellchosen the figures effectively demonstrated the training process and the learned representations in each domain even the theorems were more interpretable than i typically see being subdivided and laid out piece by piece the method is reasonably novel and broadly applicable the paper shows hcm applied to temporal visual visuotemporal and language domains given a domain with some hierarchical structure a fairly reasonable assumption this method is able to find that hierarchy with some guarantees the learned hierarchy itself as the authors note in the conclusion could be applied to downtheline endpoints such as causal learning this method really leans into explainabilityinterpretability and could thus be more compatible in humanml frameworks weaknesses while the method is novel and seems to recover structure quite well the results are not as convincing as id like to lay this out given a toy generative hierarchical model hcm is able to more effectively predict sequences than a basic rnn particularly as the levels of hierarchy increased not to be too glib but i should hope so in an environment where the hcm representations overlap with the underlying model it outperforms a learnedfromscratch hcm while in the opposite case it underperforms the authors suggest that the nature of the hcm as compared to something like a dnn allows users to understand a priori whether their pretrained model will work well which i agree with in toy visual domains with and without temporal correlations hcm learns reproduces the underlying representations but how does its ability to reproduce the actual sequences compare with appropriate baselines finally hcm is applied to a corpus from the hunger games and is able to learn commonlyrepeated phrases over time my main 
concern with all of this is the lack of actual baselines i agree that the models are interpretable and useful but they arent applied to any previouslyused datasets or compared empirically to other sota methods hcm doesnt necessarily need to win in performance given its other advantages but id like to see whether its competitive on a related note the authors provide both idealized and online hcm algorithms even the online algorithm while theoretically tractable seems practically quite slow which i assume is why the chosen domains are simple while the online algorithm seems to work well for these domains i would imagine the loss of guarantees is more likely to be impactful in harder domains it was not clear to me how the chunks were generated until i read the independence tests section in the appendix and i think that this is too important to push out of the body of the paper it also introduces the hyperparameter of statistical significance p which isnt really discussed i like this algorithm and think it has potential i can see how it can be applied both to standard ml tasks but also how it could unlock a more symbiotic humanml collaboration through its interpretability the motivation and buildup from cognitive science is clear and all else aside because of its writing i felt this paper gave me a lot more valuable insights than most that said im just not convinced by the current set of experiments i cant glean how well hcm will actually perform vs baselines on standard datasets particularly the online variant and i suspect its not computationally that practical either with some of these comparisons added i think i could accept but for now its a reject from me docsepthis paper proposes a non neural system of parsing natural language text by chunking sequences to form hierarchical structures the algorithm strongly resembles classical parsing algorithms decisions about when to chunk a phrase into a constituent are based on chi2 tests of independence where a pair of chunks that are considered to be dependent are joined into a single constituent they test this chunking algorithm on natural language data against an rnn concluding that the classical parsing algorithm is more sample efficient in achieving a low kldivergence from the true sequence data they also provide some examples of how this algorithm can be applied to temporal image data or video the paper is clear i rarely had trouble following although i didnt understand that the decision to chunk was based on a chi2 test until i read the appendix which seems crucial i enjoyed reading about the relative sample efficiency of the classical algorithm vs the rnn though i would have rather seen a fair comparison with a treernn or some other system that involves latent tree structure as well as a comparison to other classical dependency parsers the application of a classic parsing algorithm to video was a nice adaptation the overall problem i had with this paper was the fact that it is presenting a classic parsing algorithm but contains no citations to any work from the age of classic parsing algorithms i found this lack of background disturbing because as far as i can tell this algorithm is a statistical stack based parser and the authors should have looked into whether they were reproducing existing work the problems they have with efficiency of their own algorithm are resolved by many statistical parsing algorithms even allowing partial parses as they do is a property in a number of non neural parses such as httpswwwcscmuedunschneidtwparserpdf a 
dependency parser for tweets by kong et al ironically i also had trouble looking for specific classical parsing algorithms to compare with this while reviewing because the literature has exclusively contained neural parsing algorithms for so long the general area of structured prediction is one that has a long history and the authors seem not have a particular background in the problem space i recommend reading slav petrovs thesis httpswww2eecsberkeleyedupubstechrpts2009eecs2009116pdf for a deep background on the topic from the age of classical parsing although the paper described the problem of a lack of inductive bias towards hierarchical parse structures there were no citations to the literature which attempts to resolve this problem treernns rnngs etc there was also no discussion of nonneural hierarchical algorithms for structured prediction on video eg structured prediction cascades which seems necessary in a paper with experiments on a non neural hierarchical algorithm for video beyond the lack of discussion of the existing field of algorithmic hierarchical parsing the discussion of limitations does confront the possibility of non projective grammars which cannot be covered by this sort of chunking how to relax the adjacency assumption as a grouping criterion to allow for non adjacent relationships to be chunked together remains an open challenge but does not to discuss it in terms that have been used historically in parsing or acknowledge the existing parsers that cover nonprojective cases i was somewhat confused by the decision to use the hunger games as a corpus for training natural language parsers on as there are a number of more common corpora that would have compared more easily to the existing literature the little prince ptb or wikitext come to mind i was confused by the reference to teh 2006 alone as an extension of ngram models given that there was no other discussion of backoff eg katz backoff or smoothing in ngram models which has a much longer history i was not surprised that introducing a parsing algorithm with a strong inductive bias was more sample efficient than using an rnn this phenomenon is the reason why for years nlp did not use neural networks until large quantities of data and compute became easily available minor please explain how the hypothesis testing works in the main text of the paper and not just in the appendix or at least emphasize appendix a in the main text of the paper while describing the algorithm typos they the way hinton 1979 should be a parenthetical but is instead inline citation questions how does catastrophic interference relate to gradient starvation this paper is missing significant background on classic hierarchical structured prediction because it is presenting a classical parsing algorithm without a single citation to preneural structured prediction as a field i believe that it is extremely similar to existing algorithms that are rarely in use today ### Summary:
This paper develops an approach to learning hierarchical representations from sequential data. The reviewers were very positive about the overall approach, finding it well motivated and interesting with strong potential, and thought that the paper was extremely well written, with clear examples throughout. There was a good back-and-forth between the reviewers and the authors, discussing several aspects of the paper and providing constructive suggestions for improvement. In particular, the reviewers suggested improvements in terms of independence testing, comparison to further baselines, further experiments, and other points detailed in the reviews. The authors were extremely receptive to these suggestions, which is to be commended and very much appreciated, and in a response they state that they plan to take the time needed to revise the paper before publication.
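To make the chunk-merging idea discussed above concrete, here is a small illustrative Python sketch of one greedy merge step: count adjacent-pair frequencies, run a chi-squared independence test on the most frequent pair, and merge it into a new chunk if the test rejects independence. This is only a rough approximation of the reviewed HCM procedure; the contingency-table construction, significance level, and stopping rule are assumptions, and the actual algorithm (chunk marginals, hierarchy, online updates) is more involved.

```python
import numpy as np
from collections import Counter
from scipy.stats import chi2_contingency

def merge_chunks_once(seq, alpha=0.05):
    """One greedy merge step: test the most frequent adjacent pair for independence
    and, if dependent, replace every occurrence with a new composite chunk."""
    pair_counts = Counter(zip(seq[:-1], seq[1:]))
    (a, b), n_ab = pair_counts.most_common(1)[0]
    n_a = sum(c for (x, _), c in pair_counts.items() if x == a)   # pairs starting with a
    n_b = sum(c for (_, y), c in pair_counts.items() if y == b)   # pairs ending with b
    n = len(seq) - 1
    table = np.array([[n_ab, n_a - n_ab],
                      [n_b - n_ab, n - n_a - n_b + n_ab]])
    _, p, _, _ = chi2_contingency(table)
    if p >= alpha:                       # pair looks independent: no merge
        return seq, None
    merged, out, i = (a, b), [], 0
    while i < len(seq):                  # rewrite the sequence using the new chunk
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            out.append(merged); i += 2
        else:
            out.append(seq[i]); i += 1
    return out, merged

seq, new_chunk = merge_chunks_once(list("abababcdcdcd"))
print(new_chunk, seq)
```

Repeating such a step until no pair rejects the independence test yields a hierarchy, since earlier composites can themselves be merged later.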
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper presents a new gradientbased framework for learning invariant mechanisms often called relations in the paper from data drawn for multiple environments data generating processes overall the writing is excellent and the central ideas are interesting and valuable a key idea of the paper is that training data drawn from different environments can be exploited to learn mechanisms that remain invariant across those environments while true this is unsurprising and wellestablished fundamental principles of causal inference known for decades at this point directly imply that different environments data generating processes with different interventions will allow identification of different sets of causal dependencies practical methods for such identification have been demonstrated using graphical models and relatively simple methods for parameterization of those models the paper could be improved by spending less time on the known results or at least making clearer connections to prior work and spending more time clarifying what is genuinely novel about the proposed ideas in addition the authors should make a greater effort to distinguish between central ideas and implementation details multiple times in the paper basic results from the causal inference literature are attributed to relatively recent papers eg peters et al 2017 including the special properties of the causal factorization and the idea of invariance of mechanisms in response to intervention these ideas can be traced back much further for example the basic idea of invariance to intervention so called autonomy or modularity has been known since at least the 1930s heckman and pinto 2015 note that in the language of frisch 1938 these structural equations are autonomous mechanisms represented by deterministic functions mapping inputs to outputs by autonomy we mean as did frisch that these relationships remain invariant under external manipulations of their arguments the paper would be improved by making clearer when concepts were first identified and by who the empirical evidence provided for the claims in the paper is relatively modest the simulated results provided in table 1 shows only very small differences in l2 errors among variants of the authors proposed methods and more substantial improvements over icp and erm in three of four cases the discussion of these results is excellent the results on the colored mnist data show the expected results however good performance on simulated data and only a single real data set is still relatively weak evidence for the claims made in the paper the paper would be improved by increasing the number of real data sets used for evaluation references heckman j pinto r 2015 causal analysis after haavelmo econometric theory 311115151docsepin this paper the authors propose a gradientbased learning framework with a two part objective function in which one part improves the informativeness about the target variable and the other part enforces the invariance of the relation the second part is based on the icm principle and increases the stability and renders domain generalization possible the paper is well written and for the most part is easy to follow we should note that the icm principle is only usable if we have no hidden confounders ie causal sufficiency in the system the authors should clarify that causal sufficiency is an important assumption early in the manuscript and should 
clarify what will happen to the results if it is violated in general the assumptions in this work are very strong and i do not believe they will hold in reality specially regarding assumption 2 if we are assuming some of the causal mechanisms are changing across environments why the one corresponding to the target should not change although the assumptions are strong same assumptions were considered in few other works such as peters et al 2016 compares to existing work with the same assumptions this paper provides a good implementation method that is an improvement over past work and would be of interest to the iclr community the authors also discuss the conditions under which the recovered stable relations correspond to the true causal mechanisms the use of icm for causal discovery is also extensively studied in the nonparametric case in huang et al causal discovery from heterogeneousnonstationary data and in the linear case in ghassami et al multidomain causal structure learning in linear systems the definition of dointervention in page 3 is not standard what is referred to as do intervention in this paper is usually referred to as hard intervention in the literature and what is referred to as hard intervention in this paper is usually referred to as atomic intervention in the literaturedocsepthe paper is wellmotivated and studies and important topic but unfortunately it is let down by the presentation of their contributions which is confusing and at times misleading first a more minor complaint which i put here because its a source of confusion for the rest of the review the normalizing flow section is confusing because the mapping between the base distribution and y isnt clear i normally think of a normalizing flow as a map from some base distribution u to some target y such that y tu and py pudet jt u where ut1y adding conditioning as required to make a conditional flow this paper uses ty hx everywhere which i think is referring to t1y hx because we normally think of t as acting on the base distribution u and t1 as acting on the target variable my review assumes that i should read t as a map from y u but thats a little weird and should be explained explicitly more seriously i dont understand why lemma 1 isnt trivial by the data processing inequality any transformation of x can only lose information about x so if the identity function is among the set of feature extractors then h includes it because it maximizes ihx y the fact that h is independent of the flows latent variable trivially follows from the fact that choosing the identity is always optimal of course things get more complex if there is some constraint on h such that the identity isnt included but this isnt discussed on a second reading i think that this constraint is meant to come from the y perp e hx condition in section 4 but how this condition interacts with lemma 1 needs to be clearer the presentation of the method in section 4 also needs work the domain generalization problem is presented as the problem of finding h that maximizes the mutual information between y and hx in the worst case environment under the constraint that yindep e hx as far as i can tell the independence constraint is the important part of that objective under that constraint it is not clear why i wouldnt want to maximize the average mutual information or some other objective similarly its not clear why theorem 1 is useful until we get to equation 5 and it took me a couple of reads to realize that this is actually the important step on its own it 
just essentially says that if we have conditional independence then applying a 11 function maintains that conditional independence having gotten to this point in this review i think that many of my issues would be resolved if the presentation order was reversed the key condition you need is y perp e hx the paper would be far easier to follow by making it clear that is is the condition you need explaining both why we cant optimize for it directly and why this particular normalizing flow approach gives an indirect approach to achieving the condition in the current order of presentation which leads with a discussion of normalizing flows we are presented with theoretical results about flows which in isolation seem trivial the experiments show the method shows promise though they should report both irm rex kruger et al 2020s performance for coloured mnist to make it clear that there are better methods on that dataset kruger et al 2020 outofdistribution generalization via risk extrapolation ### Summary:
This paper proposes a new framework for improving supervised learning via invariant mechanisms. The reviewers agree that, overall, the paper is well written and contributes to a growing body of work on invariant prediction and causality in supervised learning. At the same time, there are some concerns regarding novelty and significance in light of previous work, as well as the overall organization of the paper, which could be improved to highlight the main contributions more clearly. Ultimately this was a borderline decision, but it is clear that the paper needs a major revision before acceptance. Although the authors have already incorporated some of the minor comments, which is appreciated, they are urged to consider the major comments (e.g., see R2's comments regarding presentation) when revising the paper.
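The reviews above center on a two-part objective: keep the representation informative about the target while enforcing invariance of the target-given-representation relation across environments. The sketch below is not the reviewed flow-based method; it illustrates the same two-part idea with a much simpler surrogate, a variance-of-risks penalty across environments (risk extrapolation, which one review cites), on synthetic data with a spurious feature. The data-generating function, penalty weight, and architecture are all assumptions.

```python
import torch
import torch.nn as nn

def make_env(n, spurious_corr):
    """Toy environment: one causal feature plus one spurious feature whose
    agreement with the label differs across environments."""
    x_causal = torch.randn(n, 1)
    y = (x_causal + 0.1 * torch.randn(n, 1) > 0).float()
    flip = torch.rand(n, 1) > spurious_corr
    x_spur = torch.where(flip, 1 - y, y)      # agrees with y with prob. spurious_corr
    return torch.cat([x_causal, x_spur], dim=1), y

envs = [make_env(1000, 0.9), make_env(1000, 0.7)]
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(500):
    risks = torch.stack([bce(model(x), y) for x, y in envs])
    # informativeness term (mean risk) + invariance surrogate (variance of per-env risks)
    loss = risks.mean() + 10.0 * risks.var()
    opt.zero_grad(); loss.backward(); opt.step()
```

The penalty weight (10.0 here) is arbitrary; in practice it trades off in-distribution fit against cross-environment stability, which echoes the reviewers' question about why one particular invariance objective is preferred.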
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: pros clustering sequence vectors is a practical and useful problem some of the business usecases described in the paper are indeed useful and relevant for analytics in healthcare and retail cons the paper is poorly written there are numerous typos and grammatical errors throughout the paper the ideas are not presented coherently the writing needs to improve quite a bit to get accepted at a conference like iclr description of related literature is done very poorly the generative model described clearly lacks justification the model is not described concretely either there is no clear description of the inference techniques used empirical results are weak docsepthe problem formulation at the bottom of page 3 correspond to what a bag of words preprocessing of a document would provide and in this the clustering would be a much simpler solution that just doing lda the paper has zero interestdocsepthis paper propose a hierarchical bayesian model to cluster sparse sequences data the observations are modeled as poisson distributions whose rate parameter lambdai is written as the summation of lambdaik a gamma distribution with rate equal to the mixture proportion alphaik the model is implemented in pystan experimental results on a realworld user visit dataset were presented the format of this paper including the listing in the introduction section the long url in section 23 and the model specification in section 32 can be improved in particular the presentation of the model would be more clear if the graphical model can be specified the motivation of choosing the observation model and priors is not clear in section 3 the author described the details of model specification without explaining why those design choices were appropriate for modeling sparse sequence data experimental results on a realworld dataset is presented however to demonstrate how the model works it would be best to add synthetic experiments as sanity check results using common baseline approaches should also be presented the results should also be properly quantified in order to compare the relative advantage of different approachesdocsepthe paper is very poorly written it is hard to understand what the real contribution is in this paper the connection of the model with hmm is not clear the literature review has to be rewritten to the reader it sounds that the authors are confused with the fundamentals itself mixture model bayesian models inference mixture models can be based on any of the exponential family distributions gaussian just happens to be the most commonly used again if this is a bayesian model why are clusters not inferred the authors further mention that in their pystan implementation k clusters were spun too quick what was the k used here was it set to a very large value or just 3 did the authors eventually use the truncated infinite mixture model in pystan the authors mention their model is conceptually similar to em but then end up using nuts why is a url given in section 23 instead of being given in the references provide a plate model describing section 32docsepthe paper discusses clustering sparse sequences using some mixture model it discusses results about clustering data obtained from a restaurant loyalty program it is not clear to me what the research contribution of the paper is what i see is that some known techniques were used to cluster the loyalty program data and some properties of the 
experiments conducted noted down no comparisons are made i am not sure what to evaluate in this paper ### Summary:
All reviewers agree to reject. While there were many positive points to this work, the reviewers believed that it was not yet ready for acceptance.
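For readers trying to parse the generative model sketched in the reviews above, here is one possible reading written as a tiny NumPy simulation: each rate is a sum of per-cluster gamma-distributed components tied to the mixture proportions, and observations are Poisson counts. Since the reviewers themselves found the model specification unclear, this should be read only as a hypothetical interpretation, not the paper's actual model; the shape parameter and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 500                                   # clusters, users/sequences
alpha = rng.dirichlet(np.ones(K))               # mixture proportions
# One possible reading of the reviewed model: lambda_i = sum_k lambda_ik with
# lambda_ik ~ Gamma(shape=1, rate=alpha_k); observed counts are Poisson(lambda_i).
lam_ik = rng.gamma(shape=1.0, scale=1.0 / alpha, size=(N, K))  # numpy scale = 1 / rate
lam_i = lam_ik.sum(axis=1)
counts = rng.poisson(lam_i)                     # sparse count observations
```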
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: 1 the paper provides an interesting solution to achieve better performance in both id and ood settings 2 the paper is wellwritten and easy to follow 1 the motivations of the assumptions definitions are not clearly clarified 2 the theories provide limited insight for designing a better ood approach 3 more baselines should be added i have several concerns about both theories and experiments 1 the motivations of the assumptions definitions are not clearly clarified for assumption 41 i do not understand the relationship between the conditional independence constraint and the claim ftextstd relies on the spurious features while ftextrob relies on the robust features in addition i think ftextstd should rely on both the spurious features and robust features moreover a more detailed analysis of the related works should be provided for definition 41 42 and 43 the relationships between the mathematics forms and the motivation are also vague detailed analysis with concrete examples would help the authors develop the theories under the classbalanced assumption which is strong in practice although the authors claim that they provide the general setting in appendix a i do not find any texts about this 2 the theories provide limited insight for designing a better ood approach the paper assumes the availability of a robust model ftextrob and aims to ensemble it with the id approach in practice we often need to train a robust model from the training data only moreover we can not verify which kind of distribution shift could take place as a result we can not guarantee the effectiveness of the proposed method in realworld scenarios 3 there are many ood approaches and the authors should compare the results with them to name a few 1 2 some minor issues 1 t appears in the wrong place in both equation 31 and equation 32 2 there are many empty references marked as in the paper 1 liu evan z et al just train twice improving group robustness without training group information international conference on machine learning pmlr 2021 2 nam junhyun et al learning from failure debiasing classifier from biased classifier advances in neural information processing systems 33 2020 2067320684 docsepintuitively it makes sense to me that since robust models and standard models could rely on different sets of features ensemble them could make a better model the method is easy to conduct and it performs well also the paper is clearly written and lots of theoretical and experimental results are shown ablation studies are conducted while i like the idea of this paper my main concerns lie in how practical are the assumption made in this paper and how would the conclusion change if those assumptions do not hold such as what if the class balance assumption doesnt hold while assumption 41 is weaker than prior works what if it doesnt hold would these affect the conclusion made in the paper reading the intuition of why using calibration from the paper it seems to me that this is because a simple ensemble method is usedfstdfrob but would another more wellfitted ensemble method make the calibration step not necessary related work there is previous work discussing the relation between calibration and outofdomain generalization although it differs from this paper in that it is in the multidomain setting while this paper has one domain in training on calibration and outofdomain generalization 2022 minor typo on page 56 several 
citations related to lightweight finetuning seems not working would be good to have the missing variance results on some of the datasets especially on one of the anticorrelated one completed in table 23 docsepthe experimental results are great and the theoretical support is clean and intuitive a second lesson here to me here is that the success of the proposed method also says something about the kind of real world shifts that exist their method does not improve ood if realworld shifts are adversarial like in the anticorrelated setting as the authors point out im not going to argue against their results which are extensive and good my only concerns are with the theoretical results which are nice to and intuitive but seem to rely on weird assumptions i think a few clarifications are warranted 1 in prop 41 it is very weird that fens is better id i dont think this is possible if fstd is trained on id data to maximize performance the reason here is that when fstd learns the id p y x then fstd is a sufficient statistics and pid y fstd x pid y fstd meaning that fstd should be strictly better than fens and frob can the authors clarify why this is not the case 2 out of the three shifts considered i am most sold on the anticorrelated one this is also the case where the method should not be expected to work this is also the case where you cannot mitigate tradeoff nor should you try to this seems to limit the applicability of the method because im not sure how we would know where to apply the method would i be able to choose whether to apply the method or now without access to labelled test data 3 in the missing spurious features assumption im not sure whether fstd would be zero wouldnt latching onto the shaggy mane mean fstd would say predict the other class the suppressed spurious features assumption also seems to have the same problem where less prevalence does not mean the conditional has lesser predictive probability for the max probability class why would frob get affected by less prevalent countryspecific features docsepthe proposed method seems simple and effective and the paper communicates it clearly the paper clearly defines the proposed problem the proposed solution and provides intuition for why the observed improvements could occur it furthermore benchmarks the proposed approach on a wide variety of datasets and settings the paper is well written and provides good intuition as to how observed effects might come to be the experimental evaluation is well organized the research question and operationalization in the benchmark are clear and well thought out the benchmark is sufficiently large and covers a variety of datasets and scenarios the detail of steps given in the different proofs is to be complimented they are mostly easy to follow due to detailed steps and provided comments it is not clear to me why only parts of the wilds benchmark are used instead of all datasets the compared baselines only include the standard model and the robust model it did not become clear from the text to me whether the baselines are the calibrated models or the uncalibrated models this might provide an additional baseline and point of reference the paper does not provide any code that would enable reproducibility or a deeper investigation of the results this is problematic in so far as results seem almost a little too good furthermore some details on the experimental setup are missing number of replications hyperparameters which might be needed for replication the theoretical work shows proofs for a 
special case of the setting ie balanced classes and assumption of orthogonality 41 this should perhaps be discussed more thoroughly proofs of prop 41 assume that r and s are orthogonal which coincides with assumption 41 i am not sure this assumption generally holds so additional investigation regarding this might provide interesting insights minor points i think eq 21 is unnecessary and does not generally hold in practice as errors depend on the data assume eg small n in one domain and magnitude of shifts typos sec 41 calibrate fstd and frob id sec 42 on ood sec 62 citation of kaggle competitions is sec 5 use fine tuning ### Summary:
Meta-review: the reviewers found the paper to be of high quality. The idea is simple to implement yet novel and insightful, the experimental evaluation is solid, and the results are strong. The paper is well written and gives good intuition about the results and their interpretation. During the review and discussion phases, several questions and clarifications were raised and some additional experiments were promised by the authors; I trust these will be incorporated into the final accepted version of the paper, some of them possibly in the supplemental materials.
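A rough sketch of the calibrate-then-ensemble recipe the reviews describe: fit a temperature for the standard model and for the robust model on held-out in-distribution data, then combine the calibrated logits. The exact calibration method and combination rule used in the paper may differ; the code below is a minimal stand-in with hypothetical logits.

```python
import torch
import torch.nn as nn

def fit_temperature(logits, labels, iters=200, lr=0.01):
    """Standard temperature scaling on held-out in-distribution data."""
    log_t = torch.zeros(1, requires_grad=True)          # optimize log T to keep T > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        loss = nn.functional.cross_entropy(logits / log_t.exp(), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return log_t.exp().detach()

def calibrated_ensemble(logits_std, logits_rob, t_std, t_rob):
    # Calibrate each model, then combine them, e.g. by summing calibrated logits.
    return logits_std / t_std + logits_rob / t_rob

# Hypothetical held-out logits from a standard model and a robust model.
n, c = 256, 10
logits_std, logits_rob = torch.randn(n, c) * 3, torch.randn(n, c)
labels = torch.randint(0, c, (n,))
t_std = fit_temperature(logits_std, labels)
t_rob = fit_temperature(logits_rob, labels)
preds = calibrated_ensemble(logits_std, logits_rob, t_std, t_rob).argmax(dim=1)
```

The calibration step matters because a badly miscalibrated (typically overconfident) standard model would otherwise dominate the sum and wash out the robust model's contribution, which is the intuition several reviewers asked about.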
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the barren plateau phenomenon is a vanishing gradient effect that arises in sufficiently randomly initialized parameterized quantum circuits specifically the norm of the gradient falls exponentially with the number of quantum registers in the circuit while not a problem for classical neural networks due to efficient gradient estimation procedures gradients for parameterized quantum circuits are obtained by statistical sampling estimating small gradients therefore adds an exponential overhead eliminating most possible computational advantages in recent years there has been an effor to handle this problem by suggesting initializations that are not fully random on the space of circuits this paper takes the following approach a practically reasonable architecture is chosen and the parameters are initialized from a gaussian distribution the main technical message is a proof that if the variance of the normal distributions is chosen as 1l where l is the number of layers the gradient decays polynomially with the number of qubits n and layers l models with the propsoed initialization are evaluated on a variational quantum eigensolver setup to find the ground state of the heisenberg model and lih hamiltonian on the considered examples gaussian initialization appears to outperform the setting where parameters are initialized uniformly the main contribution of the paper is the lower bound on the gradient this proof is very nontrivial and involves techniques and intermediate results that i think may be of interest when analyzing properties of gaussian initializations in quantum circuits in general i have not checked every statement but i am overall convinced of correctness of the proofs on a technical level i think the contribution of the paper is solid the experimental results also line up with the conclusions drawn indicating that the phenomenon described may have applicability beyond the theorem setting i have two concerns firstly due the gaussian initialization i am not sure that the proof in the paper suffices to say that the gradients are lower bounded throughout training since the distribution has morphed from that at initialization note that this was not an issue for the original derivation of barren plateau as haar distributions are invariant under the shifts induced by training the second is that the restriction to gaussians with deviation decaying as 1l essentially restricts the initialization to a constant neighborhood of the identity this assumption seems to put more of a bias on the initialization than most existing approaches and may create the possibility of adversarial problems where convergence is heavily slowed experimentally the advantage over initializing the parameters to zero seems like an artifact of their being a stationary point at identity and the addition of some noise to perturb the initial state in the noisy simulations seems to remove most observed advantage for the proposed scheme yes docsepvariational quantum circuits are parametrized models that can be trained to perform mappings using gradient descent methods in this paper the authors propose an initialisation strategy to avoid the problem of vanishing gradients that occur when the number of qubits and the circuit depth grow strengths wellwritten new initialisation several initialisation strategies have been studied for classical neural networks but few existing work extend these results to the quantum 
case in this work the authors apply the gaussian initialisation strategy to variational quantum circuits and study how it may affect the training procedure by providing a theoretical and experimental analysis theoretical analysis the authors start by describing the gaussian initialisation technique and provide theoretical guarantees in different settings the first setting corresponds to the case when the circuit architecture is made using trainable 1qubit gates and the output is projected using local observables these results are then extended to the global observable case and 2qubit gates experimental analysis the authors apply their technique to two quantum machine learning problems where they perform numerical simulations to study experimentally the training behaviour of the parameters weaknesses the zero initialisation strategy seems to be fine for the performed experiments no docsepthis paper introduces a new initialization strategy for quantum variational circuits this gaussian initialization strategy is shown to exponentially increase the upper bound on the gradient with substantial implications for addressing optimization concerns of medium to large scale quantum machine learning models two empirical examples are provided which show this initialization strategy demonstrates an improvement in performance note after author responses score 6 7 pros this work provides a very important improvement in gradient bounds given the scale of concerns for optimising qvcs this is a very exciting result the results for global observables are especially interesting since previous works have focused on the benefits of local observables to trainability the paper is generally well written and conveys the point effectively circuit diagrams are well done and add to the understandability code is provided in supplementary material which greatly improves replicability and experimental verification the survey of related work is both useful and extensive cons the biggest problem is the empirical results although these experiments are just examples and the main result is the theoretical proofs they dont add as much as they could using shot noise instead of added measurement noise would improve the realism it would also be beneficial to add examples with more realistic circuit noise eg depolarizing channels additionally showing the gradient norm in figure 3 like in figure 2 would be beneficial citations could be condensed eg quantum simulations 14 15 16 17 18 19 20 21 22 23 quantum simulations 1423 empirical comparisons to other initialization strategies would be beneficial eg block initialization the horizontal lines on the graphs dont aid interpretability the authors sufficiently addressed the potential negative societal impact of the work docsepthe authors demonstrate that over a gaussian prior of appropriate width the second moment of the derivatives of a class of variational quantum algorithms vqas is only polynomially small in the problem size n and circuit depth l for constant problem locality s which is exponentially larger than the traditional barren plateau bounds taken uniformly over parameters the authors also give a bound when the locality s grows with the problem size that lower bounds the expected derivative second moment by a finite fraction of the initial squared derivative the authors then demonstrate numerically that their initialization scheme gives much better optimization performance than uniform initialization in a variety of vqa tasks the introduced bounds are novel and though i did 
not check the proofs in complete detail the authors technical methods and proofs seem correct i also enjoyed that these results are essentially a more rigorous understanding of the intuitive fact that training vqas while near the identity should be similar to lowdepth vqas and not experience barren plateaus i think a couple of weaknesses of the paper though are maybe claiming too much from the shown results first in their discussion of theorem 42 and corollary 43 the authors imply that their results are enough to show that gaussian initialization completely absolves vqas with global cost functions from barren plateaus however these results only lower bound the second moment of the derivative with a finite fraction of the initial square of the derivative this initial derivative can be very small see for instance the global cost function warmup example in cost function dependent barren plateaus in shallow parametrized quantum circuits cerezo et al 2021 where this initial derivative is zero giving a trivial bound i would have enjoyed more discussion or examples arguing that this bound is typically only polynomially not superpolynomially small second the authors results including now also theorem 41 still rely on assumptions in the training of the model namely once training is far away from the initial vecthetavec0 it is no longer wellapproximated by a gaussian of polynomially small variance in fact for a number of parameters growing superlogarithmically with n roughly the volume of allowed region where these results are expected to hold is superpolynomially small in the volume of parameter space polynomially small in diameter this that is superlogarithmically large depth is the regime previous barren plateau results kick in ie when averaged uniformly over parameters and i suspect they may be related i recommend the authors make this limitation of their work more clear these limitations aside i still find the work a nice rigorous interpretation of a common approach to circumventing barren plateaus when one has a good guess for where in parameter space the optimum is say near vecthetavec0 and expect optimization to stay within this region barren plateaus may be avoided i previously discussed what limitations of the work i believe should be more explicitly mentioned by the authors namely i suggest tempering the claims that gaussian initialization solves all instances of barren plateaus and providing more examples or intuition as to situations where one might expect the implicit assumptions on training ie staying near vecthetavec0 to hold ### Summary:
The authors propose a new random initialization of quantum neural networks that can avoid vanishing gradients. Specifically, the new random Gaussian initialization scheme depends on the shape of the ansatz, so that the norm of the gradient decays at most polynomially as the qubit number and the circuit depth increase. This finding is also supported by the associated empirical study. The reviewers consider this an important step toward understanding the trainability of variational quantum circuits. However, some limitations of the proposal are also discussed in the reviews, and we hope the authors will discuss these limitations explicitly in the final version.
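A minimal illustration of the initialization choice discussed above: drawing each rotation angle from a Gaussian whose variance scales like 1/L, versus the uniform initialization associated with barren plateaus. The parameter-tensor shape and the exact variance constant are assumptions made for the sketch; the only point being illustrated is the 1/L scaling, which keeps deep circuits concentrated near the identity rather than near Haar-random.

```python
import numpy as np

def init_angles(n_qubits, n_layers, scheme="gaussian", rng=None):
    """Toy illustration: Gaussian angles with variance ~ 1/L vs. uniform angles."""
    rng = rng or np.random.default_rng(0)
    shape = (n_layers, n_qubits)                      # one rotation angle per gate (assumed)
    if scheme == "gaussian":
        return rng.normal(0.0, np.sqrt(1.0 / n_layers), size=shape)
    return rng.uniform(-np.pi, np.pi, size=shape)

theta_g = init_angles(8, 20, "gaussian")   # concentrated near the identity circuit
theta_u = init_angles(8, 20, "uniform")    # close to Haar-random for deep circuits
print(np.abs(theta_g).mean(), np.abs(theta_u).mean())
```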
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this contribution describes a novel approach for implanted brainmachine interface in order to address calibration problem and covariate shift a latent representation is extracted from seeg signals and is the input of a ltsm trained to predict muscle activity to mitigate the variation of neural activities across days the authors compare a cca approach a kullbackleibler divergence minimization and a novel adversarial approach called adan the authors evaluate their approach on 16days recording of neurons from the motor cortex of rhesus monkey along with emg recording of corresponding the arm and hand the results show that the domain adaptation from the first recording is best handled with the proposed adversarial scheme compared to ccabased and klbased approaches the adan scheme is able to significantly improve the emg prediction requiring a relatively small calibration dataset the individual variability in daytoday brain signal is difficult to harness and this work offers an interesting approach to address this problem the contributions are well described the limitation of cca and kl are convincing and are supported by the experimental results the important work on the figure help to provide a good understanding of the benefit of this approach some parts could be improved the results of fig 2b to investigate the role of latent variables extracted from the trained autoencoder are not clear the simultaneous training could be better explained as the authors claimed that their method allows to make an unsupervised alignment neural recording independently of the task an experiment on another dataset could enforce this claimdocsephere the authors define a bmi that uses an autoencoder lstm emg the authors then address the problem of data drift in bmi and describe a number of domain adaptation algorithms from simple cca to more complex adan to help ameliorate it there are a lot of extremely interesting ideas in this paper but the paper is not particularly well written and the overall effect to me was confusion what problem is being solved here are we describing using latent variables ae approach for bmi are we discussing domain adaptation ie handling the nonstationarity that so plagues bmi and array data clearly the issue of stability is being addressed but how a number of different approaches are described from creating a preexecution calibration routine whereby trials on the given day are used to calibrate to an already trained bmi eg required for cca to putting data into an adversarial network trained on data from earlier days are we instead attempting to show that a single bmi can be used across multiple days this paper is extremely interesting but suffers from lack of focus rigor and clarity focus ae to rnn to emg is that the idea to compare vs domain adaptation via ccakldmadam of course a paper can explore multiple ideas but in this case the comparisons and controls for both are not adequate rigor what are meaningful comparisons for all for the ae and da portions the ae part is strongly related to either to kao 2017 or pandarinath 2018 but nothing like that is compared the domain adaptation part evokes data augmentation strategies of sussillo 2016 but that is not compared if i were reviewing this manuscript for a biological journal a rigorous standard would be online bmi results in two animals is there a reason why this isnt the standard for iclr is the idea that nonbiological 
journals conferences are adequate to vet new ideas before really putting them to the test in a biological journal the manuscript is concerned with the vexing problem of bmi stability over time which seems to be a problem where online testing in two animals would be critical i appreciate this is a broader topic relevant to the bmi field beyond just this paper but it would be helpful to get some thinking on this in the rebuttal clarity this paper needs to be pretty seriously clarified the mathematical notation is not adequate to the job nor is the motivation for the varied methodology i cannot tell if the subscript is for time or for day also what is the difference between z0 vs z0 i do not know what exactly is going into the ae or the adan the neural networks are not described to a point where one could reproduce this work the notation for handling time is inadequate eg despite repeated readings i cannot tell how time is handled in the autoencoder eg whether an n x t window is vectorized vs feeding an n-sized vector one time step at a time questions what is the point of the latent representation in the ae if it is just fed to an lstm is it to compare to not using it page 3 how precisely is time handled in the ae if time is just vectorized how can one get realtime readouts in general there is not enough detail to understand what is implemented in the ae if only one time slice is entered into the ae then it seems clear the ae wont be very good because one desires a latent representation of the dynamics not of single time slices how big is the lstm used to generate the emg it seems like the most relevant baseline is to compare to the data perturbation strategies in sussillo 2016 if you have an lstm already up and running to predict emg this seems very doable page 4 we then use an adan to align either the distribution of latent variables or the distributions of the residuals of the reconstructed neural data the latter a proxy for the alignment of the neural latent variables this sentence is not adequate to explain the concepts of the various distributions the residuals of reconstructed neural data where do the residuals come from and why is one a proxy for the other please expand this sentence into a few sentences if necessary to define these concepts for the naive reader page 5 what parameters are minimized in equation 2 please expand the top sentence of page 5 page 6 top in contrast when the emg predictor is trained simultaneously with the ae do you mean there is again a loss function defined by both emg prediction and the ae summed and then backprop is used to train both in an end-to-end fashion please clarify page 8 how do the ae results and architecture fit into the emg reconstruction bmi results is it that all decoding results are first put through the ae -> lstm -> emg pipeline ie your bmi is neural data -> ae -> lstm -> emg if so then how do the adan cca and kldm fit in do you first run those three da algorithms and then pipe it through the bmi page 8 how can you say that the bmi improvement of 6% is meaningful to the bmi user if you did not test the bmi online docsepthe paper considers invasive bmis and studies various ways to avoid daily recalibration due to changes in the brain signals while i like the paper and the studied methods using adversarial domain adaptation are interesting to use in this context i think that the authors oversell a bit the problem of nonstationarity resp stability is an old one in noninvasive bcis shenoy et al jne 2006 was among the first and a large number of prior methods have been defined to robustify feature spaces to project to stable subspaces etc clearly no gans at that time the least the authors could do is to make reference to this literature some methods may even apply also for the invasive data of the paper while the authors did not clearly say so they present an offline analysis one method the gan gets 6% better results than the competitors i am not sure whether this is practically relevant in an online setting but this needs to be clearly discussed in the paper and put into perspective to avoid a wrong impression only an online study would be convincing overall i think the paper could be accepted the experiments are nice the data is interesting if it is appropriately toned down avoiding statements about having done something for the first time and proper references to prior work are given it is an interesting application domain i additionally recommend releasing the data upon acceptance ### Summary:
bmis need per-patient and per-session calibration and this paper seeks to amend that using vaes and rnns it relates seeg to semg in principle a ten-year-old approach but does so using a novel adversarial approach that seems to work the reviewers agree the approach is nice the statements in the paper are too strong but publication is recommended clinical evaluation is an important next step
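to make the decoding chain the reviewers are asking about concrete, here is a minimal sketch of one plausible reading of it (neural data -> autoencoder latent -> lstm -> emg), with a placeholder aligner slot where an adan / cca / kl-divergence style alignment of a new days data could be applied before the fixed decoder; all layer sizes, window lengths and channel counts below are illustrative assumptions and not taken from the paper under review.

```python
# illustrative sketch only: the decoding pipeline as the reviewers read it
# (neural data -> autoencoder latent -> lstm -> emg), with a stand-in "aligner"
# slot where adan / cca / kl-divergence alignment of a new day's data would go.
# layer sizes, window length and channel counts are assumptions.
import torch
import torch.nn as nn

n_units, n_latent, n_emg, t_steps = 96, 10, 12, 50

encoder = nn.Sequential(nn.Linear(n_units, 64), nn.ReLU(), nn.Linear(64, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_units))
lstm = nn.LSTM(n_latent, 64, batch_first=True)
emg_head = nn.Linear(64, n_emg)

def decode_emg(neural, aligner=lambda x: x):
    # neural: (batch, time, units); the encoder and lstm consume one time step
    # per input, so the readout can in principle be produced step by step online
    z = encoder(aligner(neural))          # per-time-step latents
    h, _ = lstm(z)
    return emg_head(h)                    # (batch, time, emg channels)

day0 = torch.randn(8, t_steps, n_units)
emg_hat = decode_emg(day0)
recon_loss = nn.functional.mse_loss(decoder(encoder(day0)), day0)   # ae objective
```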
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors present a method for unsupervised alignment of word across multiple languages in particular they extend an existing unsupervised bilingual alignment to the case of multiple languages by adding constraints to the optimization problem the main aim is to ensure that the embeddings can now be composed and the performance alignment quality does not degrade across multiple compositions strengths very clearly written a nice overview of existing methods and correct positioning of the authors contributions in the context of these works a good experimental setup involving multiple languages weaknesses i am not sure how to interpret the results in table 2 and table 3 see questions below questions on page 7 you have mentioned that this setting is unfair to the mst baseline since can you please elaborate on this i am not sure i understand this correctly regarding results in table 2 and 3 it seems that there is a tradeoff while adding constraints which results in poor bilingual translation quality i am not sure is this is acceptable i understand that your goal is to do indirect translation but does that mean we should ignore direct translation in table 3 can you report both wproc and wproc results is it possible that the gwinitialization helps bilingual translation as the performance of wproc is clearly better than wproc in table 2 however could it be the case that this somehow affects the performance in the indirect translation case imo this is worth confirming in table 3 you are reporting average accuracies across and within families i would like to see the numbers for all language pairs independently this is important because when you consider the average it is quite likely that for some language pair the numbers were much higher which tilts the average in favor of some approach also looking at the individual numbers will help us get some insights into the behavior across language pairs in the motivation figure 1 it was mentioned that compositions can be done and are often desirable along longer paths enfrruit however in the final experiments the composition is only along a triplet xeny is that correct or did i misinterpret the results if so can you report the results when the number of compositions increases docsepthis paper is concerned with the idea of inducing multilingual word embeddings ie word vector spaces where words from more than two languages are represented in an unsupervised way using a mappingbased approach the main novelty of the work is a method inspired by recent work of nakashole and flauger and building on the unsupervised bilingual framework of grave et al which aims at bypassing the straightforward idea of independently mapping n1 vector spaces to the nth pivot space by adding constraints to ensure that the learned mappings can be composed btw it is not clear from the abstract what this means exactly in summary this is an interesting paper but my impression is that it needs more work to distinguish itself from prior work and stress the contribution more clearly although 11 languages are used in evaluation the authors still limit the evaluation only to arguably very similar languages all languages are indoeuropean and there are no outliers distant languages or languages from other families at all not even the usual suspects like finnish and hungarian given the observed instability of ganbased unsupervised bilingual embedding learning dissected in sogaard 
et als paper acl 2018 and also touched upon in the work of artetxe et al acl 2018 one of the critical questions for this work should also be is the proposed method stable what are the instability criteria when does the method fail and can it lead to suboptimal solutions what is the decrease in performance when moving to a more distant language like finnish hungarian or turkish is the method more robust than ganbased models all this has to be at least discussed in the paper another question is do we really want to go fully unsupervised given that even a light and cheap source of supervision eg shared numerals cognates can already result in more robust solutions see the work of artetxe et al acl 2017 acl 2018 vulic and korhonen acl 2016 or sogaard et al acl 2018 for some analyses on how the amount of bilingual supervision can yield more or less robust models is the proposed framework also applicable in weaklysupervised settings can such settings with weak supervision guarantee increased robustness and maybe even better performance i have to be convinced more strongly why do we need fully unsupervised multilingual models especially when evaluation is conducted only with resourcerich languages another straightforward question is can the proposed framework handle cases where there exists supervision for some language pairs while other pairs lack supervision how would the proposed framework adapt to such scenarios this might be an interesting point to discuss further in section 5 style and terminology it is not immediately clear what is meant by triplet constraints which is one of the central terms in the whole work it is also not immediately clear what is meant by composed mappings hyperalignment before section 4 etc there is also some confusion regarding the term alignment as it can define mappings between monolingual word embedding spaces as well as wordlevel linksalignments perhaps using mapping instead of alignment might make the description more clear in either case i suggest to clearly define the key concepts for the paper also the paper would contribute immensely from some running examples illustrating the main ideas and maybe an illustrative figure similar to the ones presented in eg conneau et als work or lample et als work the paper concerns word translation and crosslingual word embeddings and there isnt a single example that serves to clarify the main intuition and lead the reader through the paper the paper is perhaps too much focused on the technical execution of the idea to my own liking forgetting to motivate the bigger picture other the part on language tree prior to conclusion is not useful at all and does not contribute to the overall discussion this could be safely removed and the space in the paper should be used to additional comparisons with more baselines see above for some baselines the authors mention that their approach is relatively hard to scale only in their conclusion while algorithmic complexity remains one of the key questions related to this work i would like to see some quantitative time measurements related to the scaling problem and a more thorough explanation why the method is hard to scale the complexity and nonscalability of the method was one of my main concerns while reading the paper and i am puzzled to see some remarks on this aspect only at the very end of the paper going back to algorithmic complexity i think that this is a very important aspect of the method to discuss explicitly the authors should provide eg onotation complexity for the three 
variant models from figure 2 and help the reader understand the pros and cons of each design also when it comes to their design is computational complexity the only reason to move from the star model to the hug model this argument has to be stressed more strongly in the paper two very relevant papers have not been cited nor compared against the work of artetxe et al acl 2018 is an unsupervised bilingual word embedding model similar to the muse model of conneau et al iclr 2018 which seems more robust when applied on distant languages again going back to my previous comment i would like to see how well hug fares in such more challenging settings further a recent work of chen and cardie emnlp 2018 is a multilingual extension of the bilingual gan-based model of conneau et al given that the main goal of this work and chen and cardies work is the same obtaining multilingual word embeddings i wonder how the two approaches compare to each other another more general comment concerns the actual evaluation task as prior work it seems that the authors optimise and evaluate their embeddings solely on the intrinsic word translation task but if the main goal of this research is to boost downstream tasks in lowresource languages i would expect additional evaluation tasks beyond word translation to make the paper more complete and convincing the method relies on a wide spectrum of hyperparameters how are these hyperparameters set how sensitive is the method to different hparams configurations for instance why is the gromov-wasserstein approach applied only to the first 2k vectors how are the learning rate and the batch size determined minor what is w in line 5 of algorithm 1 given the large number of symbols used in the paper maybe a table of symbols put somewhere at the beginning of the paper would make the paper easier and more pleasant to read i would also compare the work to another relevant supervised baseline the work from smith et al iclr 2017 this comparison might further strengthen the main claim of the paper that indirect translations can also be found without degrading performance in multilingual embedding spacesdocsepthis is a work regarding the alignment of word embeddings for multiple languages though there are existing works similar to this one most of them are only considering a pair of two languages resulting in the composition issue mentioned in this work the authors proposed a way of using a regularization term to reduce such degraded accuracy and demonstrate the validity of the proposed algorithm via experiments i find the work to be interesting and well written several points that i want to bring up 1 the language tree at the end of section 5 is very interesting does it change if the initialization parameter is different 2 the matrix p in equation 1 is simply a standard permutation matrix i think the definitions are redundant 3 the experiment results are expected since the algorithms are designed for better composition quality an additional experiment eg classification of instances in multiple languages could further help demonstrate the strength of the proposed technique 4 how to choose the regularization parameter mu and what is the effect of mu 5 some writing issues like the notation of the orthogonal matrix set both $\mathcal{O}$ and $\mathbb{O}$ are used ### Summary:
this paper provides a simple and intuitive method for learning multilingual word embeddings that makes it possible to softly encourage the model to align the spaces of nonenglish language pairs the results are better than learning just pairwise embeddings with english the main remaining concern in my mind after the author response is that the method is less accurate empirically than chen and cardie 2018 i think however that given that these two works are largely contemporaneous the methods are appreciably different and the proposed method also has advantages with respect to speed that the paper here is still a reasonable candidate for acceptance at iclr however i would like to request that in the final version the authors feature chen and cardie 2018 more prominently in the introduction and discuss the theoretical and empirical differences between the two methods this will make sure that readers get the full picture of the two works and understand their relative differences and advantages/disadvantages
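to illustrate the pivot-based mappings and the composition of mappings discussed in the reviews above, the toy numpy sketch below fits orthogonal procrustes maps from two synthetic languages to a shared pivot and composes them for indirect translation; the synthetic embeddings, the dimensionality and the fully aligned seed dictionary are assumptions for illustration only, and the sketch omits the papers unsupervised gromov-wasserstein initialization and the constraints that keep such compositions from degrading.

```python
# illustrative sketch only: orthogonal (procrustes) mappings to a pivot space and
# the composed mapping used for indirect translation; toy synthetic data, not the
# paper's setup or its unsupervised training procedure.
import numpy as np

def procrustes(src, tgt):
    # orthogonal w minimizing ||src @ w - tgt||_F over aligned rows (a seed dictionary)
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

def random_rotation(d, rng):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

rng = np.random.default_rng(0)
d, n = 50, 2000
en = rng.normal(size=(n, d))                                   # pivot "embeddings"
fr = en @ random_rotation(d, rng) + 0.01 * rng.normal(size=(n, d))
ru = en @ random_rotation(d, rng) + 0.01 * rng.normal(size=(n, d))

w_fr_en = procrustes(fr, en)                                   # map french -> english
w_ru_en = procrustes(ru, en)                                   # map russian -> english
# indirect fr -> ru translation composes the two maps (w_en_ru = w_ru_en.T for orthogonal maps)
w_fr_ru = w_fr_en @ w_ru_en.T
print(np.linalg.norm(fr @ w_fr_ru - ru) / np.linalg.norm(ru))  # small relative error
```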
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: in this paper the authors introduce a novel transformer or network architecture termed focal modulation network focalnet which deals with the problem of efficient long-range feature modeling different from window-wise selfattention swin and focal attention focalnet adaptively aggregates surrounding tokens from different levels of granularity this is novel and interesting extensive experiments on ms coco imagenet and ade20k demonstrate the effectiveness of the proposed method in summary the idea is interesting and the performance is promising i would like to recommend acceptance strengths 1 i enjoy reading this paper as well as its writing and presentation 2 the idea is interesting and novel considering different levels of granularity makes sense in image processing 3 the experimental results are promising demonstrating state-of-the-art performance weaknesses 1 from l124-l132 it is easy to understand the translation invariance and decoupled feature granularity but why is input-dependency considered as an advantage also spatial- and channel-specific modulation is achieved by depthwise operations and cannot be considered as an advantage 2 i like the idea of different levels of granularity but this can simply be considered as a multiscale depthwise convolution in the implementation fig 2c which limits the novelty multiscale multibranch always works and introduces no novelty 3 some important papers are missing and not compared such as convnext which only uses depthwise convolution as well and mixformer which can be considered as a strong baseline both are published in cvpr22 and were released much earlier than the submission deadline the authors should cite and compare these baselines 1 liu zhuang et al a convnet for the 2020s proceedings of the ieee/cvf conference on computer vision and pattern recognition 2022 2 chen qiang et al mixformer mixing features across windows and dimensions proceedings of the ieee/cvf conference on computer vision and pattern recognition 2022 the authors adequately addressed the limitations and potential negative societal impact of their work docsepthis paper proposes focal modulation which uses context at multiple spatial scales from a stack of convolutions combined with gated aggregation to produce a modulation this modulates each query point through element-wise multiplication focal modulation is evaluated as a drop-in replacement for selfattention the method is tested in several experiments against strong baselines and has a thorough ablation analysis update after author response thank you for the responses and further experiments to be clear i was mostly concerned with the kernel size experiment as a way to glean the variance of other results since you have provided direct experiments to address that concern i am more confident that this is a paper worthy of acceptance the paper proposes a novel method for improving computer vision architectures based on visual transformers focal modulation is a clever way to include expanded context at each layer while simultaneously removing expensive pairwise interaction terms present in selfattention the method is described clearly the diagrams are illustrative and the experimental evaluations and ablations are thorough this paper is a solid contribution to an important field of computer vision however i would like to see some better estimate of the variance of results for example in figure 5 in the appendix the map results vary considerably depending on the setting of the kernel size there seems to be little relationship between the kernel size and the map but the map varies considerably 41.2 to 41.6 the limitations and societal impact are well addressed docsepthe paper proposes a focal modulation module that is more effective and efficient for modeling token interactions their main contributions are proposing an efficient way to model input-dependent long-range contextual interactions the authors conduct experiments on tasks of image classification detection and segmentation the experimental results show sota performance the performance achieves sota on almost all tasks the paper is well-written the authors conduct abundant ablation studies to validate the effectiveness of each design in the proposed method yes docsepthis paper proposes a focal modulation module to replace the attention module in transformers specifically this module contains hierarchical contextualization several layers of depthwise convolutions gated aggregation and element-wise modulation to fuse information from the token itself and the context actually the element-wise modulation can also be regarded as a gate mechanism thus this module sounds like a convolution gate mechanism this paper does extensive experiments on various vision tasks to show the advantages of the proposed modules strengths 1 the writing of this paper is clear and integrated 2 the experiments in this paper are extensive including several vision tasks and ablation studies it is convincing that focalnet is better than swin and focal transformers neutrality focal modulation can be regarded as convolutions plus gate mechanisms the idea is ok not incremental but not good enough thus i put the idea aspect into neutrality weaknesses in the ablation table 9 about the fusion between the token itself and the context it only shows the experiment that replaces multiplication with addition what about totally removing the query branch and moving the parameters and computation to other components yes ### Summary:
all the reviewers acknowledge that the paper is wellwritten novel and shows strong performance gain besides all the reviewers are satisfied with the authors response to the raised concerns ac doublechecks the paper reviews and response and finds that the paper is wellshaped and generally flawless ac recommends acceptance
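for readers unfamiliar with the module discussed above, the sketch below shows a focal-modulation-style block built from the ingredients the reviews mention: hierarchical depthwise-convolution contexts, gated aggregation and element-wise modulation of a query projection; the number of focal levels, the kernel size and the softmax gating used here are illustrative assumptions rather than the exact focalnet design.

```python
# illustrative sketch only: a focal-modulation-style block as the reviews describe it
# (hierarchical depthwise-conv contexts, gated aggregation, element-wise modulation
# of a query projection). levels, kernel size and softmax gating are assumptions.
import torch
import torch.nn as nn

class FocalModulationSketch(nn.Module):
    def __init__(self, dim, levels=3, kernel=3):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)                      # query projection
        self.ctx_in = nn.Conv2d(dim, dim, 1)
        # hierarchical contextualization: stacked depthwise convolutions, one per level
        self.levels = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel, padding=kernel // 2, groups=dim),
                nn.GELU(),
            )
            for _ in range(levels)
        )
        self.gates = nn.Conv2d(dim, levels + 1, 1)           # gated aggregation
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                                    # x: (b, c, h, w)
        q = self.q(x)
        ctx = self.ctx_in(x)
        gates = self.gates(x).softmax(dim=1)
        agg = 0
        for i, level in enumerate(self.levels):
            ctx = level(ctx)                                 # growing receptive field
            agg = agg + ctx * gates[:, i:i + 1]
        # global context as the last focal level
        agg = agg + ctx.mean(dim=(2, 3), keepdim=True) * gates[:, -1:]
        # element-wise modulation of the query
        return self.proj(q * agg)

block = FocalModulationSketch(64)
y = block(torch.randn(2, 64, 32, 32))   # -> (2, 64, 32, 32)
```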
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary the work extends an existing algorithm to train a convolutional neural network by selecting a subset of fixed random weights by 1 using multiple random values per weight 2 using a sampling procedure to select the particular random value for each weight if these networks are finetuned after randomvalue selection is performed they perform close to networks purely trained with sgd on cifar10 and mnist strong points interesting results that extend results on the lottery ticket hypothesis and random weight learning weak points the paper builds heavily on an existing algorithm but it does not cite it in the method section the paper makes many claims and fails to provide evidence for these claims the work is not clearly motivated it is unclear why this problem is interesting or the results insightful recommendation short while the results are interesting the paper is poorly motivated and does not conform to standards of scientific work i recommend rejecting this work recommendation long the method section appears to be independent work but it is the same as ramanujan et al 2019 extended with multinomial sampling while ramanujan et al 2019 are cited it is not cited in this section furthermore the paper claims that random initialized networks and trained networks perform the same but it does not lay out evidence or an argument for this otherwise it is unclear why this method is interesting i recommend rejecting this work comments for authors i think these are some good initial results that you have and you can work with that but right now the paper has many flaws that need to be ironed out before you make another attempt i do not think you can get this work accepted in this round and instead should try to learn as much as possible from the discussion for a resubmission i think fixing the claim references and so forth will be easy the hard question is why is your work interesting using multiple random values per weight can be seen as some form of quantization of weights why is your way of doing something similar to quantization more interesting than other forms of quantization please also consider if it is true that selecting fixed random weights is training or not clearly you are optimizing weights it does not matter if you do it with sgd an evolutionary strategy or your algorithm in the end you optimize weights to have particular values i would say that it should be considered training but if your method is considered just a different optimization process compared to sgd why is it interesting finding subnetworks like done in ramanujan et al 2019 is interesting because you have smaller trained networks but you do not have subnetworks it could be interesting if you do 1 a thorough analysis that yields some insights and make this an analysis paper or 2 try to get better performance by doing both optimizations of weights and selection of weights but this is very similar to wortsman et al 2019 some minor things equation 3 has an additional w h already contains the w figure 6 has an annotation error i believe the upper line is supposed to be the ps method ramanujan et al 2019 whats hidden in a randomly weighted neural network httpsarxivorgabs191113299 wortsman et al 2019 discovering neural wirings httpsarxivorgabs190600586 docsep summary the paper investigates a type of neural network in which one of k possible fixed weights is chosen in each neuronal connection the weights 
themselves are fixed and random but the scores that determine which of the weights is chosen are updated through backpropagation using a straightthrough estimator the accuracy and behavior of such networks is studied for small networks on small datasets mnist cifar quality clarity originality and significance the paper is wellwritten and easy to follow the main idea seems interesting at first sight and it is wellmotivated but after some consideration of related effects in neural networks the results do not seem very surprising see below i think a deeper exploration of the connection to other phenomena would be necessary to make this paper relevant to the conference eg to weight quantization to few bits per weight or to variational dropout the paper seems to not go beyond adhoc conclusions of the form that these peculiar networks perform competitively on challenging datasets which seems to be a bit of an overstatement to me the authors also claim that the trained networks might be useful for initialization but to really make this point strongly a comparison to other practical methods of datadriven initialization on larger datasets with larger architectures might be needed to convince the reader why do the results not seem surprising to me it is possible that i misunderstood the algorithm which we could clarify in the rebuttal period of course but in my understanding of the described approach the straightthrough estimation of the scores will lead to preferring the selection of larger or smaller weights where standard gradient descent training would lead to larger or smaller weights this is consistent with the distribution of the selected weights as shown in several figures with the observation that uniform initialization works better than normal initialization and with the observation that both gs and ps tend to prefer weights having large magnitudes as learning progresses it would also account for the observation of the similarity in error rates when the network has a sufficient number of weights to choose from pros interesting idea the paper is wellwritten cons of limited interest to the conference audience i believe not clear if there is practical relevance or potential to improve scientific understanding detailedminor comments the main bullet points at the end of the introduction were not fully substantiated in the paper in my opinion 1 i was not fully convinced of a performance equivalence between random initialization and training because the slot machines are effectively trained 2demonstrates that current networks can model challenging nonlinear mappings extremely well even with random weights is not 100 clear because the weights are a choice among a set of initially random weights but that choice is a result of training 3connects to recent observations this seems to happen mostly in the two sentences before 43 and there connection and statement are not entirely convincing to me we find a random 6 layer network that performs as well as a 6 layer trained network without any form of pruning here it seems to me that the slot machine after training is in fact not random because it was trained and that training exploited weightcorrelations that potentially span multiple layers the argument could be extended to regular training in the sense that regular training just picks out the random weights among all the random floating point numbers this is a bit exaggerated but i think it shows why the term random 6 layer network may be an overstatement after training why is k chosen from the set 2 8 
64 128 it seems that some natural values in this sequence are missing or that a more loguniform spread would be more natural maybe there is a specific reason that is not obvious then it could be mentioned update after author replies and discussion i have updated the review score after reading the authors reply and revision of the paperdocsep summary this paper proposes a method to train a neural network by selecting a weight from a set of k randomly generated weights for each edge in the network each edge has a different set of random weights quality score is assinged to each of k random weights which determines the weight used in the forward calculation instead of optimizing weights directly the proposed method optimizes the quality scores with the straightthrough gradient estimator experimental results show that the neural network trained by the proposed method achieves high accuracy compared to random initialization even when k2 reasons for score overall i vote for rejecting the authors say in sec 3 that the goal is to construct nonsparse neural networks with completely random weights that achieve high accuracy however the model obtained by the proposed method is no longer a network with completely random weights because the authors optimize quality scores instead of original weights it is empirically shown that a neural network can achieve a high accuracy by properly selecting weights from a set of random weights prepared in advance however such a result is not so surprising from the viewpoint that the quality scores are optimized also this paper has few practical implications i would like to see if the network can still achieve a high accuracy when every edge has a common set of k random weights if this is the case the proposed method may lead to a network that is efficiently compressed minor concerns p1 a fixed a set of random weights a fixed set of random weights regarding the author responses i have updated my ratingdocsepupdate after authors response i want to thank the authors for their response and i am happy to see additional results on shared sets of weight values which allows to easily relate the work to methods for training lowbitwidth networks to further increase impact and significance of the work it would be necessary to really flesh out the advantage of the proposed method over other similar methods it is not too surprising that the method works but why would i prefer it over other methods nonetheless the paper presents novel empirical analysis that adds to the body of work on nonstandard training of neural networks to make a clear stance for the reviewer discussion i have therefore increased my score to 7 though i would rate the paper at the lower end of score7 papers summary the paper proposes a novel scheme to obtain well performing neural networks without classical adaptation of weights instead each connection can have one out of k randomly drawn values and training consists of a backpropagationbased procedure to find which value out of the k possible values to select for each weight the method can be interpreted as finding a highperforming subnetwork within a larger network of random weights however in contrast to previous methods that literally implement the latter the proposed method is computationally more efficient experiments are performed on a number of neural network architectures on mnist and cifar10 contributions novelty impact 1 proposal of a novel scheme for finding wellperforming networks without explicit training of weights this is interesting and adds 
to a growing body of recent work on alternatives to classical training of neural nets which is insightful for both developing better training algorithms but also understanding the nature of neural network training my concern is that the proposed method is conceptually very similar to previously known approaches pruning a larger network which is also discussed in the paper but also some methods for training lowbitweight networks such as 1 and 2 while the proposed method is an interesting alternative implementation the advantages compared to the other approaches are fairly limited accordingly the potential impact of the work might be somewhat limited as well im afraid 2 a nice and extensive set of ablations and control experiments as well as repetitions of experiments to establish statistical significance of the results the paper in particular the experimental part is well executed and the ablations and controls allow for being optimistic about the generality of the findings which has a positive influence on the potential impact of the work 3 the paper shows that networks obtained with the proposed scheme can also act as a good initialization for further finetuning leading to very well performing classifiers this process is also analyzed in terms of overall computational budget flops and in 2 of 3 cases shown compares favourably against standard neural network training in terms of impact this is another nice result to add but probably not strong enough to replace standard initialization anytime soon 1 binarized neural networks training deep neural networks with weights and activations constrained to 1 or 1 courbariaux et al 2016 2 xnornet imagenet classification using binary convolutional neural networks rastegari et al 2016 score and reasons for score im quite torn with this paper on one hand the method works well is thoroughly analysed and the paper is very well written and polished in many respects i would even say this is an exemplary paper on the other hand the paper suffers from only adding a quite simple variation on existing work particularly the work on training lowbitwidth networks 1 and 2 above as well as later extensions to more than singlebit weights is conceptually very similar the forward pass uses constrained weight values but gradient information is accumulated in a realvalued variable as the appendix notes the main idea could also be implemented simply via pruning in a larger network though less computationally efficient to further strengthen the paper it would be good if the paper could answer one or both of the following questions i how do the results contribute to understanding weight initialization and the training process what can the reader learn that wasnt known already ii what are the concrete advantages of the proposed method over previously proposed alternatives what does it do better what shortcomings does it address is it faster is training more stable i would love to give this paper a very high score because of the great execution and presentation but the lack of novel insights or clear methodological advantages makes this hard i am currently voting for a weak accept because the paper is very well written and experiments are thorough if the scoring was based on novelty alone im not sure that the paper would clear the bar for a toptier conference i am of course happy to reconsider my final verdict in light of the other reviews and authors response strengths 1 great presentation of the work a very well written paper and good main results 2 experiments are well 
executed multiple repetitions many controls and ablations that one wants to see to improve confidence in the generality of the findings weaknesses 1 little novelty the proposed algorithm is a nice idea but its not too surprising that it operates essentially on par with a previously proposed pruning method since the algorithm can even be conceptually recast as pruning in a larger network 2 the writing puts a lot of emphasis and focus on the distinctive features of the method understandably so given how close it is compared to other methods but i think its also fine to not start off in an almost defensive fashion and simply state that this is another possibility of implementing the same idea but with the following advantages and disadvantages correctness reasoning in the paper is sound experiments are well executed and many controlexperiments and ablations are shown clarity the paper is very well written results are nicely presented and related literature is discussed in a useful fashion improvements that would make me raise my score major issues 1 the main issue is a lack of novelty addressing either i how the paper adds new knowledge in light of the current body of literature or ii stating the precise advantages of the proposed method over alternatives would be crucial i dont see an obvious way for i but for ii a starting point could be to do more detailed comparison against other methods in particular ramanujan and see whether the proposed method compares favorably eg in terms of training stability of robustness wrt hyperparameters 2 another possibility to add novelty to the paper would be to focus on lowbitwidth training where instead of drawing k values for each weight separately values are reused per layer or even across the whole network typically first and last layers need to be treated differently ie they require more bitwidth reliable and robust methods to obtain eg 2 4 or 8bit networks is a timely and important topic and the proposed method has potential to contribute to that body of work as well though it is admittedly a bit of a deviation from the current story and main focus of the paper i want to add this as a suggestion to the authors it could work but it could also severely reduce focus and clarity of the paper not necessarily as an improvement that id expect to see minor comments a the iclr header seems to be missing in the pdf file b in the probabilistic version of the method it might be worth experimenting with some annealing schedule where the randomness of the method is gradually reduced over training eg via reducing a gumbel softmax temperature making the magnitude of the scores very large has essentially the same effect but is less controllable since it has to be indirectly influenced via the learning rate i would expect convergencetesttime performance of the method to benefit from such a schedule and perhaps even help close the gap to greedy selection c is the supplementary material supposed to go into a separate document did not doublecheck the submission instructions d a bit of a nitpick the phrase that weights are never updated might suggest some miraculous phenomenon id rather say theres a set of weight values from which one value is probabilistically selected so if one considers the weight of a particular connection as a random variable across different forward passes then the expected value over that random variable changes smoothly as training progresses resembling a standard weight update process quite closely ### Summary:
the idea behind this paper is to develop a training algorithm that chooses among a fixed set of weights for each true weight in a neural network the results are reasonable though difficult to quantify as either good or surprising performance from the algorithm a perhaps interesting point is that additional finetuning from these found networks can in some cases best the accuracy of the original network the pros of this paper are that it is a neat original idea with the exception of the limited scale of the benchmarks ie the selected architectures the paper is largely wellexecuted the primary shortcoming of the paper as discussed by the reviewers is the lack of clarity in its implications specifically it is difficult to position the result as contributing to a practical aim or leading to additional future work based on the reviews and discussion my recommendation is reject in particular this paper would be significantly improved by bringing in a strong motivational context and therefore additional comparisons for example the context for the work of ramanujan et al 2019 is that perhaps it is possible to find subnetworks of large initialized networks that will permit more efficient training in appendix a this paper proposes that the technique here could be cast as pruning within a much larger network following results from zhu and gupta 1 and also ramanujan et al 2019 finding a sparse network within a larger network can produce a more accurate network than training a network of equivalent size to the sparse therefore these results could potentially be cast and as a more efficient way to perform the techniques of ramanujan et al 2019 alternatively the results that demonstrate that finetuning the identified networks improves performance over the standard network could be more robustly evaluated and perhaps cast as either an alternative training technique or leveraged as a technique like warm starting 2 this is a very interesting and promising direction it appears that the paper just needs a bit more distillation 1 to prune or not to prune exploring the efficacy of pruning for model compression michael zhu and suyog gupta in international conference on learning representations workshop track 2018 2 on warmstarting neural network training jordan t ash ryan p adams neurips 2020
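a minimal sketch of the weight-selection scheme described in the reviews above: every connection holds k fixed random candidate values plus a trainable score, the forward pass greedily selects the highest-scoring candidate, and a straight-through trick lets gradients flow to the scores while the candidate values themselves are never updated; the value range, the choice of k and the softmax-based straight-through estimator are assumptions for illustration and do not reproduce the authors exact update rule.

```python
# illustrative sketch only: a "slot machine"-style linear layer in the spirit of the
# method the reviews describe (k fixed random values per weight, trainable scores,
# straight-through gradients). initialization and the estimator are assumptions.
import torch
import torch.nn as nn

class SlotLinear(nn.Module):
    def __init__(self, in_features, out_features, k=8):
        super().__init__()
        # k fixed random candidate values per weight; never updated by training
        values = torch.empty(out_features, in_features, k).uniform_(-0.1, 0.1)
        self.register_buffer("values", values)
        # one trainable quality score per candidate value
        self.scores = nn.Parameter(torch.zeros(out_features, in_features, k))

    def forward(self, x):
        # greedy selection: pick the candidate with the highest score
        idx = self.scores.argmax(dim=-1, keepdim=True)
        hard = torch.zeros_like(self.scores).scatter_(-1, idx, 1.0)
        # straight-through estimator: hard selection in the forward pass,
        # softmax gradients flow to the scores in the backward pass
        soft = self.scores.softmax(dim=-1)
        sel = hard + soft - soft.detach()
        w = (self.values * sel).sum(dim=-1)
        return nn.functional.linear(x, w)

layer = SlotLinear(784, 10, k=2)
out = layer(torch.randn(32, 784))
out.sum().backward()        # gradients land in layer.scores, not in layer.values
```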
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper aims to solve the problem of human-machine interfaces without supervision or prior knowledge about the users desired tasks this is challenging due to the large space of interface designs this paper proposes a reinforcement-based interface update method that maximizes the mutual information between the users commands and the induced state transition where an underlying key idea is that the more intuitive the interface the less noisy the users commands the main contribution is the design of a mutual information objective/reward for enhancing interfaces the experiment section also shows several studies to demonstrate the effectiveness of the updated interfaces and application examples the main strength of this paper is the intuitive design of the mutual information reward considering the users influence on the next state and its opposite the major weakness of this paper is that there is no comparison with baselines it is unclear whether the given dataset/problem makes it easy to design an interface further it is still hard to clearly confirm the correlation between the mutual information and the true reward over the results particularly in fig 2 na docsepthis paper presents a method for building an adaptive user interface in an unsupervised manner the key idea is to use the mutual information between the user input and the state transition based on this idea the authors propose an algorithm to maximize the mutual information lower bound from a small amount of user operation data the proposed approach is evaluated using five existing datasets real-world user study data and another expert user demonstration strength the unsupervised user adaptation task setting is interesting and the idea of using mutual information content seems reasonable experimental evaluations have been conducted from multiple perspectives demonstrating at a minimum the effectiveness of the proposed method weaknesses while offline evaluation using existing data makes sense evaluating the actual use of adaptively changing interfaces with this approach is not necessarily sufficient the cursor control task seems a bit easier than the unknown interface that this study assumes multiple user experiments on diverse interfaces with a similar or greater number of participants would increase the methods reliability the authors demonstration is somewhat unreliable as experimental evidence as for the methods claim that it does not require prior knowledge about the desired task it does not appear to be strictly substantiated as there are no corresponding experiments considering the purpose of the study what should be shown is the advantage of the system compared to the users solo adaptation to the system however the experiment mainly only compares the initial state and after adaptation and there is no discussion of what happens after the participants use the baseline interface for an extended period there is an experimental discussion on interfaces where the proposed method cannot be successfully applied which is a good point for the paper although the study involved numerous user experiments there was no specific mention of a process for obtaining consent from participants or for ethical review docsepthis paper presents a reinforcement learning algorithm that uses mutual information as a proxy reward for achieving understanding between a computer with a randomly perturbed ui and a human trying to accomplish a task using that ui without knowing how it is
perturbed the only feedback signal to the algorithm is the entropy of the humans movementsinputs where it is assumed that the human would be making more randomchaotic inputs in a sense proportional to how unexpectedly unintuitively the games state changed with respect to the action taken by the user the main evaluation is with a space ship control game where the actual motion of the ship was perturbedoffset initially by some random theta degrees the algorithm eventually learns to reduce theta either to zero or 180 degrees so that it exactly follows the humans command only at these two stable points does the human stop acting so randomly in response to the state changes since the state changes are apparently and acceptable reaction to the action so further exploration by the human is not required as the authors state the key idea in this paper is that regardless of the task when an interface is more intuitive the users commands are less noisy they formalize this noise as the mutual information between the users command signals and the induced state transitions in the environment their work is in line with recent literature in user empowerment where the machine does not assume or infer the users goals but only uses cues that indicate the users satisfaction with the result as a reward upon which to learn in this humanintheloop system the key strength of the paper is that it is interesting and the idea that users will express frustration with a poorly performing interface by random frantic exploration of potential alternative commands that might work seems plausible and is supported by the authors first study that shows that their mutual information pseudoreward is correlated with interfaces that adapt to meet the users expectations become more intuitive as the authors state the key idea in this paper is that regardless of the task when an interface is more intuitive the users commands are less noisy the authors give reasonable proof of the fact that with adaptive interfaces users do seem to exhibit the behavior of taking more noisy actions when the resulting effects in the environment state changes are not what would be intuitively expected the authors do include a user study with a simple adaptive interface that shows that their mutual information pseudoreward can drive convergence in this case the authors also show that a knowledgeable user can drive adaptation using the pseudoreward as the author states there has been a number of attempts to find natural reward signals in the real world when interacting with humans whether these be through nlp bci eeg facial expressions and physiological signals none of these are as easy to obtain as the noise in the input that these authors leverage making this perhaps the most useable of all prior proposed methods in the field the weakness of the paper is that the use case is not immediately compelling the first contact scenario is a good framing for this however unlikely but in reality this is a more theoretical method for machines to learn preferences from humans the more compelling use case is probably in applying this as a feedback mechanism to search engines or to personalize the fit of something like exoskeletons or prosthetic limbs the user studies are not exhaustive but i would not expect that for a contribution that is fundamentally about the algorithm it is laudable that the authors disclosed that the singular subject in the feasibility study was the first author and while on the one hand it could be argued that having such an informed 
participant as an evaluator made the task easier for the machine potentially the author knew to proportionally vary the randomness of the responses in a way that would facilitate convergence it is not entirely to be dismissed especially in considering a case where a person was trying to fit or train a machine intentionally knowing how the machine learned through randomness the authors show that this is a naturalistic response in general so intensifying it intentionally would not be entirely out of line with user behaviour also i believe the term coadaptive in the title is a bit misleading as you do mention that the algorithm has no ability to adapt the behavior of the human i believe you are saying that the human naturally seems to adapt their behavior in response to the algorithms correctness and that you are relying on the existence of this adaptation otherwise you pseudo reward would not work but maybe clarifying this a bit would be good this contribution is limited to the space of adaptive interfaces where the correct settings for the users task cannot be known in advance but where the parameters of the controlling algorithm have the ability to adapt themselves to a more optimal setting the assumptions are reasonable but not exhaustively studied the process of adaptation is comparatively slow compared to other methods so a stronger motivating use case would have been better the method could have been more exhaustively compared versus other humanin the loop methods such as nlp or facial expression analysis or simply giving the user an explicit performance feedback channel on the algorithms performance eg 1 to 10 as a keyboard input and evaluated with respect to speed of convergence and user satisfaction eg if could take longer yet users might prefer it as more natural ### Summary:
the paper describes an approach to learning an adaptive user interface ie mapping raw inputs to the agents actions in an unsupervised way via reinforcement learning the goal is to learn interfaces that are intuitive for the user with the supposition that the users inputs become less noisy as the interface becomes more intuitive to that end the proposal is to use the mutual information between the raw input provided by the user and the resulting state transitions as a reward proxy the approach is evaluated on a series of control and typing domains as well as a small-scale user study involving a cursor control task the paper was reviewed by three researchers who read the author response and discussed the paper with the ac the reviewers agree that the problem of adapting a user interface in an unsupervised way is interesting and the proposed use of mutual information for adaptation is sensible and interesting the reviewers initially raised concerns about the absence of a compelling use case and the experimental evaluations notably the lack of appropriate baselines reviewer eraa and inadequate experiments reviewers eraa and phlr the authors made a concerted effort to address most of the reviewers concerns which included experiments conducted on the cursor domain using the alternative method suggested by reviewer eraa however the authors did not address the experimentation issues raised by reviewer phlr who finds that the paper lacks experimental evidence for some of the claims being made as it stands the paper doesnt show that the interface that can be achieved with this approach is truly intuitive making such a claim requires comparative experiments with appropriate baseline interfaces and more detailed user analyses as such a detailed set of user studies may be out of scope for a conference-length algorithms paper focused on the use of mutual information as a reward proxy for interface learning the claims in the paper should be revisited
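to make the reward proxy discussed above concrete, the snippet below computes a simple plug-in estimate of the mutual information between discretized user commands and the induced state transitions on synthetic data; the paper itself maximizes a variational lower bound rather than a histogram estimate, so the binning, the sample sizes and the toy intuitive-versus-scrambled interface setup are assumptions for illustration only.

```python
# illustrative sketch only: a plug-in estimate of the mutual information between
# discretized commands and transitions, used as a reward proxy in the spirit of the
# approach reviewed above; the actual method optimizes a variational lower bound.
import numpy as np

def mutual_information(cmds, transitions, bins=8):
    # cmds: (n,) discrete command ids; transitions: (n,) discretized transition ids
    joint, _, _ = np.histogram2d(cmds, transitions, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
cmds = rng.integers(0, 4, size=5000)
# an "intuitive" interface: the transition is mostly determined by the command
aligned = (cmds + (rng.random(5000) < 0.1) * rng.integers(0, 4, size=5000)) % 4
# a scrambled interface: transitions ignore the command
scrambled = rng.integers(0, 4, size=5000)
print(mutual_information(cmds, aligned), mutual_information(cmds, scrambled))
```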
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper introduces a framework to treat denoising diffusion probabilistic models ddpms as solving differential equations on manifolds the goal is faster sampling without significant loss of quality i thank the authors for this submission i believe there is value in this work both from a theoretical and practical perspective in general i am willing to accept the paper i have two main suggestions 1 please double-check the writing and notations some sentences are hard to understand and equations contain errors eg eq 2 where $\hat{\alpha}_t$ carries a wrong index 2 as ddpms are relatively new i was expecting a bit more elaborate introduction 3 i was honestly quite disappointed with the presentation at times some equations are stated without any intuition if you are short of space i would rather move some of the experimental parts to a supplementary 4 sec 43 and in general i would also work with toy problems and easily visualizable data this will provide additional insight for the reader i am not very familiar with ddpms but as far as i can tell the numerical technique introduced makes sense and leads to improved results i therefore recommend acceptance docsepthis article studies pseudo
numerical methods for ddpm this is based on a previous work that already establishes the relationship between ddpms and a certain class of differential equations on manifolds the pseudo numerical methods separate the gradient and transfer step of classical numerical algorithms and for each part choose the best of both worlds with this they are able to provide faster converengence and efficient update steps because gradients do not need to be recomputed for every step while i would not guarantee that methods do not exist already in classical optimization i have not seen them in any related application the paper is overall well written and the contribution and main ideas are explained clearly starting at section 33 to 4 the derivations become slightly confusing and it takes a lot of referencing back to find all the variable names again a legend for the variables and a bit more redundant explanation would probably help here the same goes for the derivations in the appendix which i could not follow completely experiments are done on four datasets with different resolutions using pretrained models for the manifold description the results show that the introduce pseudo numerical methods converge much faster and provide better results in less iterations than the previous ddim method and classical numerical methods the qualitative examples in the appendix look good but some of them are too smooth to compete with general sota generators i am not sure if the smoothness here is related to the pretrained models that were used minor comments the authors claim that their implementation is in the supplementary i was not able to find any supplementary except the appendix but i am honestly not sure if this is a failure of me using openreview the implementation should definitely be published in the end though i think it is bad practice to move the related work section in the appendix the large figures in the appendix are very hard to understand because the subfigures are not separately titled there are some grammatical errors throughout the text which should be proofread again the idea of separating the gradient and transfer part is to the best of my knowledge i am not an expert though novel and i can see many applications besides the ones proposed in this paper the shown results might not be exactly stateoftheart in terms of generating images but show clear advantages over traditional numerical methods therefore i recommend accept docsephighlighting the high computational complexity for sampling from denoising diffusion probabilistic models ddpms eg wrt gans authors build on the connection between diffusion processes and odes to propose efficient pseudonumerical methods so as to sample data from the data manifold the main idea is to combine the discrete update proposed in ddims with a fourthorder gradient estimation given by the rungekutta or linear multistep methods the motivation being that such gradient estimator should yield trajectories that stay closer to the data manifold they empirically assess their methods on cifar10 and celeba in terms of sample quality measured by fid and show that they get a x20 speedup wrt ddims or a significant improvement in fid with the same number of steps clarity overall i believe that the clarity of the submission should be enhanced first the introduction can be improved to better stress the motivation of the submission if i understand correctly this work builds on probability flows song et al 2020 which leverage the existence of a deterministic process whose 
trajectories have the same densities as the original diffusion process this deterministic process satisfies an ode that depends on the score original drift and diffusion terms but also on the score function consequently classical numerical ode solvers eg rk can be leveraged to sample data from the probabilistic model authors then state that results obtained via this approach are subpar and suggest that this is due to the solvers tendency to sample data far from the data manifold 1 why is this true would be necessary to give some intuition and to refer to a theoretical analysis 2 what is the precise problem is the issue that the model oversample the data distributions tail ie fails to fit the distribution properly or that the numerical methods fail in what sense in these areas as the score is undefined or hard to estimate smld song and ermon 2020s motivation to inject noise is built on the latter the authors then provide pseudo numerical methods for diffusion models pndms which produces trajectories that iterate data on the highdensity region of the data hence tackling the aforementioned issue also perhaps this is question of taste but i believe that the background section and most of the papers clarity could be greatly improved by taking the perspective of song et al 2020b that is a continuous diffusion process forward perturbating data and the associated reverse diffusion process generating data additionally section 31 is challenging to follow would perhaps be better to put less equations but spend more time explaining why and how they matter strengths first the proposed method is conceptually simple and showcase previously proposed methods ddims as a special case where the update is a firstorder euler step then the submission shows strong empirical results on common datasets like celeba with faster convergences or significantly better fid for the same number of steps yielding sota authors report a x20 speedup wrt ddim but this in number of steps and fourth order methods tradeoff convergence speed for computational cost figure 3 suggests that the speedup is around x15 for cifar which is still significant although it would ideal to report the runtime directly in table 2 finally figure 4 is quite nice as it empirically illustrate the proposed method ability to sample trajectories that like closer to the data manifold which was the original motivation weaknesses i think that the main weakness is the writing although i believe that section 43 would deserve a deeper empirical analysis as it is directly investigating the core motivation of this submission relation to prior work perhaps worth citing sohldickstein et al 2015 in section 1 citations for rungekutta and the linear multistep methods appear to be missing it is not entirely clear to me what method is meant by probability flows song et al 2020b is it with variance exploding sde smld or variance exploding sde smld cf table 1 from that paper additional feedback however classical numerical methods sauer 2017 have problems when they are applied to ddpms what class of numerical methods the euler and rk methods are ode solvers although extensions exist for sdes it is unclear how they can be applied to ddpms or is it implicitly implied that they are used for the the corresponding deterministic process probability flow to iterate our data on the highdensity region would suggest to reformulate eq 3 epsilontheta is not defined would advise to do so so for the paper to be selfcontained especially as epsilontheta is used through the entire paper 
table 2 error bars confidence intervals are missing bold is not defined as methods have different computational cost per step it would be very useful to additionally show this metric figure 3 time has no unit figure 4 axis names are missing i personally find this submission interesting and significant yet believe that clarity needs to be improved to enable readers to get the most out of the papers insights ### Summary:
this paper presents a new ddpm model based on solving differential equations on a manifold the resulting numerics appear to be favorable with faster performance than past models most of the reviews thought the main result was of interest and were impressed with the performance reviewer c9by points out some challenging issues and analytical questions that remain unanswered in the text they also have some simpler textual revisions that seem less important in general this paper has the misfortune of receiving reviews whose confidence appears to be low while partially this is a byproduct of the noisy machine learning review system the difficulty of the text itself is substantial and made the paper less than approachable the authors are encouraged to continue to revise their text based on feedback from as many readers as possible that said the authors were quite responsive to reviewer comments during the rebuttal phase which significantly improved the text overall this is a borderline case and the ac also had some difficulty following details of this technically dense paper given the positive technical assessments of the work and at least one reviewer defending the papers clarity the ac is willing to give this paper the benefit of the doubt
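The reviews above describe the core idea of separating a numerical update into a gradient part and a transfer part. As a purely illustrative sketch of that split (not the paper's actual algorithm or code; the function names, the use of standard fourth-order Adams-Bashforth weights, and the deterministic DDIM-style transfer are my assumptions), one pseudo linear-multistep update could look like this:

```python
def ddim_transfer(x_t, eps, alpha_bar_t, alpha_bar_prev):
    # deterministic DDIM-style transfer (eta = 0): predict x0 from the noise
    # estimate, then move the sample to the previous timestep
    x0_pred = (x_t - (1 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    return alpha_bar_prev ** 0.5 * x0_pred + (1 - alpha_bar_prev) ** 0.5 * eps

def plms_step(x_t, eps_t, eps_history, alpha_bar_t, alpha_bar_prev):
    # "gradient" part: blend the current and three previous noise predictions
    # with fourth-order Adams-Bashforth weights (no extra network evaluations)
    e1, e2, e3 = eps_history[-1], eps_history[-2], eps_history[-3]
    eps_prime = (55 * eps_t - 59 * e1 + 37 * e2 - 9 * e3) / 24
    # "transfer" part: plug the blended estimate into the DDIM-style update
    return ddim_transfer(x_t, eps_prime, alpha_bar_t, alpha_bar_prev)
```

Because the blended estimate reuses previously computed noise predictions, each step needs only one new network evaluation, which matches the reviewers' point that gradients do not have to be recomputed for every step.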
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the proposed method has been evaluated on several benchmarks the idea is simple and the proposed method achieves a performance improvement on several benchmarks about the motivation the key motivation is unclear as described in the introduction or method sections the key motivation are existing mixbased methods cannot be combined with each other and most of existing mixbased methods cannot effectively combine more than 3 images it is very confused to me that why we need to combine the existing mixbased methods it is necessary to give a detailed discussion about the motivations theoretically the authors try to utilize the experimental results about mixup cutmix and mixup to show the necessity of the motivation to my knowledge the results are weak because miixp should perform better than mixup cutmix mixupcutmix gives a noise cutmix label for a mixup image input my major concerns are about the key motivations and the experiments of the paper as for the proposed stackmix method it has the highest performance when k is equal to 2 the results make me confused about the second motivation should we need to combined more than 2 images for data augmentation please explain for this phenomenon about the paper writing a lot of grammatical errors exist in the main manuscript and make the paper hard to follow for example page 1 there is work to reduce the cost of the search there are some works to reduce the cost of the search page 2 follow up workwith correspondingly weighted labels followup works label the image with correspondingly weighted labels page 3 setting including setting inclduing in section of related work the difference between the proposed stackmix and the previous mixbased augmentation method should be discussed in detail the size of the stacked image input is related to the number of k the authors claim that the proposed method do not change the general network architecture to my knowledge the size of the input can influence the structure of the network about the experiments the authors list all of the experiments settings in table 2 the authors have conducted the experiments on supervised learning setting for several datasets and networks but only on cifar dataset for other settings can the authors explain for this docsepthe proposed method is very simple and shows performance improvements across the board in the experiments when combined with other mixup based techniques the results show that it is competitive with cutmix on its own and boosts performance further when combined with it or mixup the compatibility with data augmentation is also evaluated results indicate an improvement with corrupted data and in the semisupervised learning case as well there is no theoretical explanation of why the proposed method helps to improve performance there is a lack of an attempt to motivate the proposed approach in general inference speed is reduced by applying the new technique it would be very useful to provide some form of justification for the proposed approach that is not just based on improved accuracy on the test set i can see that it may be difficult to provide theory justifying the proposed method but perhaps some intuitive justification could be provided docsep1 the method is simple and easy to implement 2 it can be used with other existing augmentation methods 3 the authors validated the method on multiple datasets showing performance gain 4 to provide a fair comparison 
the authors accounted for a different number of hyperparameters epochs etc showing that the effect of stackmix is nontrivial and potentially can not be explained by computation or model size differences weakness and suggestions 1 in stackmix each input is presented as a concatenation of two images and the label is the mean of the two onehot labels although the figure explains the axis of concatenation it will be better if you mention the axis of concatenation in the text as well 2 you should evaluate performance for unsupervisedssl methods which rely heavily on data augmentation techniques 3 section 36 is not very clear some of the experiments are not well explained and the intuition is not clear in section 36 therefore we designed several experiments the sentence seems incomplete designed experiments tofor questions 1 in section 35 for mixupand cutmix k represents the number of images combined how do you combine multiple images k2 for mixup 2 in section 36 onsrn18cifar10 we only swept over the top image in stackmix for the first convolutional layer could you explain more about the intuition for this step 3 in another case the standard onehot setup is given two forward passes for inference at test time why are twoforward passes given and how ### Summary:
meta review ac read the paper reviews and responses ac appreciates the simple and effective stackmix method that surpasses all existing baselines though the average rating is below the acceptance bar ac still recommends acceptance due to the comprehensive experimental results that may shed light on future research in the community however ac suggests that the authors address the negative comments especially from reviewer z7zb to improve the quality of the paper for publication
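For concreteness, here is a minimal sketch of the k = 2 stacking described in the reviews above: two images are concatenated along a spatial axis and their one-hot labels are averaged. The concatenation axis and the equal label weighting are assumptions taken from the review text, not necessarily the paper's exact recipe.

```python
import numpy as np

def stackmix_pair(x1, y1_onehot, x2, y2_onehot, axis=0):
    # concatenate two images along a spatial axis and average their one-hot
    # labels (the k = 2 case discussed in the reviews)
    x = np.concatenate([x1, x2], axis=axis)
    y = 0.5 * (y1_onehot + y2_onehot)
    return x, y

# toy usage: two 32x32x3 images with 10 classes -> one 64x32x3 input
x1, x2 = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_stacked, y_soft = stackmix_pair(x1, y1, x2, y2)
```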
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper describes a new benchmark for comparing and contrasting deep learning knowledge tracing models this benchmark is motivated by the observation that the research community is busy creating a variety of models based on different assumptions and approaches some major and some minor and that the results on standard data sets vary in ways that make judging new models difficult the paper posits that data cleansing and other pipeline processes may explain the variability rather than factors intrinsic to the model being proposed the chief contribution is a toolkit called pykt which include python routines for standardizing data cleansing and data set preparation as well as attendant recommended procedures the benchmark was utilized to compare 5 different flavors of dlkt models across 7 different publicly available datasets the evaluation tasks included predicting the students response on the last question based on historical data and predicting multiple student responses in all cases the student response is a binary variable of either correct or incorrect the paper concludes with a series of observations on the performance of the selected models on these data sets using the proposed benchmark toolkit the chief finding is that the original dkt model still performs the best whatever one may say about the particulars of this benchmark the team is tackling a very important challenge within the research community how can we ensure that we are comparing applestoapples and that the results being reported are reproducible and meaningful the literature is filled with papers where someone tweaks a current model in some small way and reports a marginal gain in performance that is elusive to replicate the strength of this paper is the thoroughness of this first step towards creating a community benchmark the team did a terrific job reviewing the plethora of models out there and picking core representative examples along with selecting appropriate and available data sets the paper is framed around two wellchosen research questions that motivate the concluding observations the limitation of this work applies to all knowledge tracing approaches that reduce student learning assessment down to a binary value of correct and incorrect we know that these sorts of observations usually measure recall or recognition rather than deep comprehension by the students themselves contemporary approaches towards assessment will need to handle longer and more nuanced language and require much better nlp but all of the datasets that are being used are based on this binary response assumption the paper makes a big deal about label leakage but it could have been explained much more clearly docsepthis paper proposes a comprehensive pythonbased benchmark platform pykt to guarantee valid comparisons across dlkt methods via thorough evaluations the pykt library consists of a standardized set of integrated data preprocessing procedures on 7 popular datasets across different domains and 10 frequently compared dlkt model implementations for transparent experiments experimental results on the finegrained and rigorous empirical kt studies yield a set of observations and suggestions for effective dlkt the proposed toolkit is open source overall i believe this paper has made a good contribution to the knowledge tracing community although i am not an expert in this area i find this paper easy to follow 1 the proposed 
benchmark platedform pykt is novel and can guarantee valid comparisons across dlkt methods via thorough evaluations 2 experimental results on the finegrained and rigorous empirical kt studies yield a set of observations and suggestions for effective dlkt 3 this paper provides comprehensive experiments with insightful analysis 4 i read some codes in the toolkit and found it is wellorganized 1 i think more background about knowledge tracing baselines will be helpful docsepthis work focuses on the knowledge tracing problem the authors point out that the preprocess procedures in existing works are often private and the evaluation protocols are different and far away from the real world senerio to address this issues this paper presents a comprehensive python toolkit named pykt the toolkit provides standardized dataset preprocess procedure and several popular dlkt model implementations 1 this paper presents a problem statement section which is readerfriendly 2 this work is overall wellmotivated and discuss its application in realworld scenarios 3 the authors present detailed analyses and insights into the experimental results 1 potential limitions are not discussed in this paper 2 the authors should provide more instructions on how to contribute to the pykt docsepthe authors state that data preprocessing procedures in existing dlkt approaches are often private or custom and they differ in terms of the evaluation protocol to address these and make valid comparisons across dlkt methods happen they introduced pykt library it consists standardized preprocessing procedures on 7 popular datasets across different domains and 10 stateoftheart model implementations the authors provided 5 observations and suggestions from their results one of these observations suggests that wrong evaluation setting may cause label leakage that generally leads to performance inflation the problem and the contribution are important and explained very clearly the authors propose a platform which helps different methods to be compared and evaluated in a more standardized way they worked on 7 popular datasets in kt and 10 stateoftheart dlkt models their opensource library which includes some dlkt implementations and evaluation protocols can be valuable for future research some minor comments the authors discussed about evaluation protocols comparison of different models and so on but i do not see a very concrete discussion regarding scalability which could be nice to include table 1 basically repeats some of the statistics in the text which could be avoided docsepthis paper introduces pykt a python library providing implementations of 10 deep learning based knowledge tracing dlkt models this library also includes standardized data preprocessing methods and evaluation protocols allowing to benchmark the main deep learning based approaches to perform knowledge tracing the paper also proposes to perform a benchmark of these models on 7 datasets the authors tackle an often neglected but still extremely important aspect which is reproducibility of existing methods they provide opensource and ontheshelf implementations of existing methods together with readytouse data preprocessing data splitting and model evalutation protocols this work should ease reproducibility and reduce barriers for new methods to compare to existing ones the benchmark are well conducted with extensive hyperparameter search the code is documented with an online documentation but i think some work remains to be done on this documentation to easily use this 
software for instance there is no quickstart available in english and many of the methods are not documented i am afraid this is a brake to make the code easily usable maybe the authors can consider adding some notebooks to illustrate a basic usage of their software as well the authors states that they are open to new contributors and so i suggest they polish their documentation and include some guidelines for new comers to know how to easily integrate their models and contribute the repository does not include any unittesting tool allowing continuous integration docsepthis paper conducts a comprehensive comparison across a representative sample of deep learningbased knowledge tracing dlkt models producing several insightful if worrying findings including that recently proposed methods hardly outperform one of the first dlkt methods published seven years ago i think this on its own is a major contribution to this area of research and so i am recommending acceptance a comprehensive comparison across a representative sample of deep learningbased knowledge tracing dlkt models this comparison yields several worrying findings such as the existence of data leakage artificially increasing performance metrics publically release a benchmark to standardize evaluation for future work in dlkt methods at least for english the github repository does not appear to have any instructions beyond installation see documentation below docking a point for this but would be happy to raise my score if the authors agree to add these instructions ### Summary:
this work develops a pythonbased benchmark platform pykt that implements several deep learning based knowledge tracing models the research is well motivated and the paper is well written the experimental section is also thorough and provides procedures for handling several popular datasets across different domains the reviewers raised some minor concerns and the authors are requested to address them in their final submission
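A recurring point in the reviews above is label leakage in knowledge-tracing evaluation. The sketch below illustrates, in generic code that does not reflect pykt's actual API, an evaluation that predicts each student's final response while conditioning only on strictly earlier interactions; the model.predict interface is hypothetical.

```python
def last_response_accuracy(model, sequences):
    # sequences: list of student interaction lists [(question_id, response), ...]
    # score only the final interaction, conditioning on the strictly earlier
    # history; feeding the target interaction's own response into the encoder
    # would be the kind of label leakage the reviews warn about
    correct, total = 0, 0
    for seq in sequences:
        if len(seq) < 2:
            continue
        history, (q_last, r_last) = seq[:-1], seq[-1]
        p = model.predict(history, q_last)  # hypothetical interface: P(correct)
        correct += int((p >= 0.5) == bool(r_last))
        total += 1
    return correct / max(total, 1)
```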
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies the multisource domain adaptation msda problem the authors argue that the existing msda solutions 1 do not explicitly consider distribution conditioned on labels of each domain 2 rely on limited feature extraction based on one extractor 3 do not well explore target data due to the absence of label correspondingly multiepl is proposed based on moment matching although the design of the proposed method seems reasonable its novelty is marginal additionally the evaluation in the experiments is somewhat unfair i vote for rejection the paper is well organized the technical details are clearly presented a few comments are summarized below in eq1 the authors minimize the discrepancy between every two distinct domains it is unclear to me why not minimize the pairs of each source domain and target domain npairs the goal is to align the distributions between source and target please clarify the motivation of aligning two source domains besides it would be good to have one baseline only considering labelwise moment matching losses for only between n source and 1 target pairs in page 5 the motivation of diversifying features from different extractors is unclear to me please clarify the benefit of classifying feature according to extractor id labels moreover the ablation study presented in section 53 does not show a clear improvement by introducing the diversifying loss i would encourage the authors to design another analytical experiment to show its effectiveness the performance improvement compared to the state of the arts is limited specifically for digitfive dataset a missing recent work ref1 reports an average performance of 918 to show the consistent performance improvement over this strong baseline id encourage the authors cite and compare it under the same setting ref1 hang wang et al learning to combine knowledge aggregation for multisource domain adaptation eccv 2020 besides ref1 there are several other recent msda papers are missing including but not limited to ref2 chuang lin et al multisource domain adaptation for visual sentiment classification httpsarxivorgabs200103886 ref3 haotian wang et al tmda taskspecific multisource domain adaptation via clustering embedded adversarial training icdm 2019 other minor point a typo in page 7 threshoold threshold updates thanks for the authors response some of my queries 1st and 3rd were clarified however unfortunately i still think more needs to be done to show the superiority of the results i retain my original decisiondocsep summary and contributions in this work the authors proposed an algorithm for multisource domain adaptation while the results seem promising the technical contribution is incremental and limited meanwhile more empirical results are needed to validate the effectiveness of the framework strengths the paper is well written and easy to follow the problem investigated in this paper ie multisource domain adaptation is of significance weaknesses the technical contribution of this work is limited labelwise moment matching or adversarial training eg 1 has been a common practice in singlesource domain adaptation the authors simply applies this idea of multimode aware domain adaptation to multisource domain adaptation moreover comparing the first line ie multi0 in table 2 with m3sda in table 1a we find that this labelwise momentmatching makes almost no contribution the empirical results especially the ablation 
studies do not hold pseudolabeling the unlabeled data in the target domain and using multiple feature extractors can also be easily used by m3sda and dctn expecting a definite performance boost i expect the authors to provide such results moreover pseudolabeling and ensemble learning however are not novel they are widely adopted techniques and can be easily incorporated into any algorithm eg m3sda for performance improvement even on the dataset officecaltech and amazon reviews the performance improvement of the proposed algorithm is minor 1 pei z cao z long m wang j 2018 multiadversarial domain adaptation aaai 2018docseppaper summary the authors propose a novel method for multisource domain adaptation msda for effective adaptation the proposed method adopts three techniques 1 labelwise moment matching 2 pseudolabeling target data and 3 ensembling multiple feature extractors experimental results show that the proposed method outperforms several stateoftheart methods in both image and text domains review summary although the design of the proposed method seems reasonable its novelty is marginal additionally the evaluation in the experiments is somewhat unfair i vote for rejection details strength this paper is wellorganized and is easy to follow i believe that the proposed method can be easily implemented without any obstacles good performance in both image and text domains is appealing such results should be highly appreciated especially in machine learning community weakness and concerns marginal novelty the three techniques that the proposed method adopted are all similar to those already proposed in the literature i could not find any novel and specific design or strategy to combine them specialized for msda classwise distributional alignment is a common technique in recent domain adaptation methods eg r1 and r2 r1 a dirtt approach to unsupervised domain adaptation iclr 2018 r2 unsupervised domain adaptation via regularized conditional alignment iccv 2019 pseudolabeling is also a common technique in recent domain adaptation methods eg r1 and r3 r3 asymmetric tritraining for unsupervised domain adaptation icml 2017 using multiple feature representations is not so common but is presented in r4 and r5 r4 domain adaptation with ensemble of feature groups ijcai 2011 r5 domain separation networks neurips 2016 the design of the feature diversifying loss is not reasonable it can be minimized by just making feature representations to be easy to discrminate their extractors which does not necessarily increase the diversity of the representations for example given two extractors that share the same parameters adding a large offset to outputs from one extractor leads to high performance of the extractor classifier but does not increase diversity of the feature representations the exprimental setting is somewhat unfair since the proposed method utilizes n feature extractors the model complexity in the proposed method should be n times larger than that in existing methods docsepin this paper the authors propose a multiepl for multisource domain adaptation the key idea includes two folds 1 to align labelwise moment and 2 to ensemble multiple feature extractor experimental studies on 3 datasets are done to verify the proposed mutlepl overall the paper is wellwritten the technical approach is simple and sound my major concern is on the technical significance of the method here are the detailed comments 1 one motivation of the paper is that current methods fail to consider the shifts among sources however 
there are some multiple source transfer methods explicitly modelling the interdomain similarities eg ref1 the paper also misses some important multiplesource references please refer to the survey ref2 for different types of multiple source transfer methods it would be better to have a comprehensive discussion on the related works 2 the proposed labelwise moment matching is not new in transfer early subspace based work eg jda ref3 and latest semantic deep learning based transfer methods eg ref4 share the similar idea 3 the threshold taw is used to obtain the good target labels on the one hand it is unclear how this parameter should be set for different transfer tasks on the other hand high confidence score does not imply correct target label prediction error reinforcement may happen even with a welltuned taw 4 the usage of ensemble feature extractor is actually using high complexity to enhance prediction accuracy the scalability could be an issue of the proposed method especially considering that multiple source may have extremely large data size 5 sensitivity analyses on the balancing parameters alpha and beta should be done 6 why data augmentation is done on officecaltech10 datasets there are many datasets containing multiple domains with sufficient data eg officehome please use these datasets instead of constructing artificial realworld dataset 7 the baseline methods can be further improved please consider ref2 for more baselines regarding the current results mutltepl performs larger improvements over m3sda on the 2nd 3rd and 4th tasks in digit five dataset while only achieves marginal improvements in other tasks eg tasks in officecaltech10 dataset more analyses on the performance difference of different tasks should be discussed 8 based on the ablation study mutileplr achieves comparable results with multiepl which indicates that the extractor classifier and feature diversifying loss have less importance in the overall objective ref1 sourcetarget similarity modelings for multisource transfer gaussian process regression ref2 multisource domain adaptation in the deep learning era a systematic survey ref3 transfer feature learning with joint distribution adaptation ref4 deep transfer learning with joint adaptation networks update thanks for the authors response however i am not convinced on several points eg 3 7 considering the other reviewers comments i think the paper needs to be further improved thus i will keep my score ### Summary:
in this paper the authors proposed a solution to the problem of multisource domain adaptation all the reviewers have two concerns 1 the technical contribution and novelty are limited and 2 the experimental results are not convincing therefore this paper does not meet the standard for publication at iclr the authors are encouraged to improve this work by addressing the issues raised by the reviewers
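For reference, the labelwise (class-conditional) alignment debated in the reviews above can be illustrated with a generic first-moment version; the methods under review may use higher-order moments, adversarial alignment, or different weighting, so this is only an assumed sketch that uses pseudo-labels for the target domain.

```python
import torch

def labelwise_moment_loss(feat_src, y_src, feat_tgt, y_tgt_pseudo, num_classes):
    # match per-class feature means between one source domain and the
    # pseudo-labelled target; classes missing from either batch are skipped
    loss, matched = feat_src.new_zeros(()), 0
    for c in range(num_classes):
        src_c = feat_src[y_src == c]
        tgt_c = feat_tgt[y_tgt_pseudo == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue
        loss = loss + (src_c.mean(dim=0) - tgt_c.mean(dim=0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)
```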
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper looks at lstms with the intention of understanding their functional connectivity i am not sure exactly what the relationship between the brain and lstms is being assumed or proposed herein however i understand the need to understand complex neural networks regardless of their relationship to biological systems i would have liked to have a discussion with respect to what the hierarchical organisation is due to is this merely a repercussion of the connectivity for example what do the authors think in terms of work that looks at ablation ie damage it might be useful to bear in mind limitations of such work if various seemingly perhaps extraneous factors are not taken into account see httpsdoiorg101007s4211302000081z i think this paper can be polished to the level of a solidly good paper if the authors can sketch out a bit more their rationale and syllogisms with respect to my above questions minor figures are very hard to read is it possible to redesign them slightly to make the text bigger in latex to open double quotes you need to use two backticks also the cite and citep commands should be used appropriately in terms of places where citep is needed as well as use of optional arguments to avoid double parentheses docsepthis paper applies tools from neuroscience to understand how language models integrate across time the basic approach is to present a phrase preceded by two different context phrases one that is natural ie the phrase that actually preceded it in the corpus and one that is randomly selected the authors then measure how long it takes for the unit activations to become similar for the two different contexts which provides a measure for how long the context impacts the representation they find that 1 timescales increase at later layers of the language model 2 that only a small fraction of units exhibit long timescales 3 that longmediumtimescale units appear to come in two forms which they try and characterize using graphstyle analyses pros how language models integrate across time is clearly important and this paper describes interesting first steps in characterizing the analysis of time using relevant tools from the neuroscience literature the method presented is simple and broadly applicable the graphstyle results seem intriguing if a little hard to make sense of i also think that the sparsity of the longtimescale units is cool and interesting limitations and questions 1 its not clear to me if the notion of time is a meaningful one in a language model for example the duration of contextual effects on a unit that codes syntactic number will presumably be highly variable and depend upon the details of the particular sentence being encoded thus a natural question is how variable are these timescales from momenttomoment whats being plotted is the average across a bunch of sentences segmented at a particular moment a conjunction how robust are these results if one examines a different point in a sentence are the timescales of some units more variable than others update the authors have repeated their analysis for a different sentence point after the 10th word and report similar results this analysis is helpful though of course the 10th word is not a very principled break point and there presumably is a lot of variation in timescales that are being averaged across i continue to wonder how meaningful the notion of an absolute timescale is 2 none of the steps in 
the graph analyses seemed particularly natural or wellmotivated to me why were the graph edges thresholded at z5 and why was kcore analysis performed i find it hard to make sense of what this analysis tells us about how language information is processed is there some reason why medium timescale controller units and longtimescale integrator units should help with language processing if these results are purely exploratory and lack a clear interpretation then perhaps the authors could help the reader by explaining the thought process behind the exploration perhaps starting with the mds plot would be useful rather than the kcore analysis because the mds plot clearly shows some interesting structure the authors have motivated some of their analyses by discussing brain research reporting that longertimescale regions are more densely connected of course the relationship between connectivity between largescale brain regions and the units in a lstm remains highly speculative but having some motivation is helpful 3 it would be interesting to know how dependent these findings are on the models architecture would similar results be found for a transformer or a simpler grustyle rnn the authors have attempted to address this point but with limited time were not able to train a network to a high level of performance minor points in figure 4 it would be helpful if the absolute timescale was labeled in all plots rather than the rank of the unit or the normalized timescale the absolute timescale seems much more meaningful to me and the units can of course still be ranked just the axis labels changed or augmented the legend for figure 4c is incorrect docsepthis paper explores the application of innovative methods to track the flow of linguistic information in lstm language models in particular the overarching question is how contextual information might be encoded in the network at the level of single units and how context disruption might alter the lstm dynamics and thus impact its predictive ability the paper is clear and it tackles an interesting question the approach is well motivated and the authors give a brief survey of the most recent applications of this kind of methodology in linguistics and cognitive neuroscience studies the methodology is generally appropriate though some details and parameters eg numerical thresholds seem to be chosen arbitrarily also the analysis could be improved by applying statistical testing in order to better quantify the strength of the observed effects overall i think this is a nice paper though it might be especially relevant to the linguistics community rather than to the iclr community moreover i think that further analyses are required in order to better clarify some important aspects in particular i think that ablation studies should be performed in order to better identify the functional role of the controller and integrator units whose actual functional role remains a bit speculative and mostly based on structural connectivity information it would also strengthen the paper to have some more controlled simulations where the contextual information is defined according to specific linguistic constraints in order to better characterize what the target units are actually encoding indeed as also noted by the authors almost all the long timescale units are of unknown function finally i think that it would be important to establish whether these findings are generally applicable to lstm models regardless of the specific architecture under investigation eg what happens if 
we force the lstm to rely on fewer units does the hierarchical organization of the context improve by adding more layers other comments why did the author choose to test the model on a different corpus anna karenina novel rather than considering a test set from the same corpus from which the training set was derived the tolstoy book might have a quite different linguistic structure from that of the corpora used to train the lstms it might be informative to also include a third condition inbetween intact and random context where the same context words are maintained with scrambled order this would allow to better understand the role of individual words in shaping context representation and activating the lstm units in fig 1d it is interesting to note that the unit 823 green line actually exhibits a sharp increase in difference after the shared segment starts do the authors have a possible explanation for this kind of phenomena was it observed systematically in other units in relation to the results shown in fig 3a i did not understand how the thresholds and parameters for the kcore analysis were chosen pg 3 there is a typo regarding the size of the output layer 50000 in fig a1 error bars would help in better understanding the actual difference between the curves in order to improve reproducibility it would be very helpful to share the source code used for these analyses docsepupdate after author response i think this is a very promising paper and i am really excited about seeing techniques from neuroscience employed to answer questions about neural network models the authors have further conducted several additional experiments after reviewer comments which i appreciate however my most fundamental concern the mismatch between the method and the way that it is validated unfortunately still stands which is why i would encourage the authors to further pursue this line of work but recommend to reject it for iclr summary this paper proposes to apply timescale methods from neuroscience to investigate the timescale organisation in neural language models more specifically the authors test the timescale of individual units in a word and characterlevel lstm by comparing the units activations values on the same sentence but with different contexts using this method the authors first show that the higher layers on average have longer timescales they then for all units they fit a logistic function to the recovery curves and use the halftimes of this curves as an indication of the time scale of these units they test the syntax unit and two longdistance units found by lakretz et al and show that the number units have similar timescales while the syntax unit have a longer time scale lastly the authors analyse the connectivity between the longer time scale units and find that the units with longer processing timescales make a larger number of strong projections within these units the authors identify two sets of units in the wordlevel lstm controller units that play a role in how the connectivity of the network is updated and integrator units that instead integrate information strong points neuroscience has long been asking questions about the brain that are very similar to the questions we now ask about neural networks crosspollination between these fields is extremely important and this paper contributes to this aside from the main technique the paper introduces some interesting and useful methods such as projectivity analysis and kcore analysis i think these methods can be useful for other researchers as 
well time scale analysis of lstms is a very relevant and interesting topic that deserves more attention than it is currently getting concerns my main concern is that there seems to be a mismatch between the language time scales on which the authors operate their experiment is designed to investigate the impact of extrasentential context but the lakretz et al results they keep coming back to concern syntactic phenomena that are only relevant within a sentence which is a different scale in other words the units found by the authors of this paper are longdistance when it comes to integrating context but the syntax and number units found by lakretz et al are not really related to that they model relationships within sentences theoretically speaking they should be reset at the beginning of every new sentence and they should thus be completely independent from the content that the authors find this to be untrue is interesting but inconsistent with what lakretz et al describe these unit do since this is not addressed at all in the paper it makes the results in general a bit difficult to interpret update after author response in their response the authors clarified that the they have only analysed single sentences where two distinct subsentences are combined with a conjunction this unfortunately does not make a difference for the argument whether two sentences are split by a full stop or instead concatenated with and does not make any difference for the argument above since the subjectverb agreement relationships that the units the authors look at model do not cross these boundaries either furthermore in their response the authors state that the find that the context representations of units was reset at sentence boundaries as i asked before i appreciate that the authors did these additional experiments but i find the result somewhat worrisome since the units they are looking at are syntactic units that encode number across long distance subject verb relationships they should be reset both when a new sentence starts as well as when a new conjunct with a new relationship starts in terms of sv relationships there should be no difference between the boy kicked the ball and the girl caught it and the boy kicked the ball the girl caught it that the authors do find a difference points to a potential flaw in methodology relatedly the authors say that their result that the syntax unit is a long distance unit while the number units are not this is not consistent with what they say in the related work of the section but also not with the results reported by lakretz et al who hypothesise that the syntax units represent the depth of the syntactic dependency this is something that changes with every new incoming word whereas the number units are the ones that have to keep their activation constant across time while as i said before i think it is great that the authors try to use methods from neuroscience into the field i do think that in this case the main method they propose is only very marginally different from earlier work in particular khandelwal et al perhaps it would make more sense to put a bit more stress on the rest of the methods as well btw also lakretz et al do connectivity analysis the results are a bit underexplained and understanding them requires many back and forths to the appendix i would have appreciated a bit more motivated interpretation of several aspects for instance why is there such a large difference in activation differences in different units in the preshared segment part and is 
this related to the halftime it seems so from the plots what is the difference between character and wordlevel models in terms of expectations wed expect there to be an additional level of timehierarchy perhaps how do assessing activation differences and correlations differ in terms of conclusions these things should in my opinion all be worked out a bit better lastly there are a few unsupported claims the most important of which that their method recovers the previously discovered units of lakretz et al while as far as i understand they actually only use their method to analyse those neurons but did not find them independently for other suggestions and comments see below to summarise while i think the idea is very nice and definitely worth working out further i do think that some work is needed to make this a publishable paper suggestionscomments for authors typographic if you use quotes in latex you should use different ones for left and right for them to appear correctly check for instance line three in the introduction to prevent additional spaces after abbreviations like eg and ie put a backslash eg lerner et al put all references within parenthesis introduction switches from present tense to paste tense in the last paragraph we measure the timetaken for the effect of this prior context to decay see methods i dont really understand what this means you measure how long it takes for these changes to not be measurable anymore try to avoid double parethesis with abbreviations eg wlstm gulordava et al 2018 should be wlstm gulordava et al 2018 you can do this with citeptext beforetext aftercitation has an 650dimensional has a 650dimensional without finetuning to the novel i first thought this sentence was unfinished until i read back and realised that the novel is your corpus this is a bit confusing perhaps you could rephrase how the cell state activation differ how the cell state activations differ we will see that the activation difference drop quickly drops quickly see the activation difference drop quickly there are several references that were published at acl conferences that are listed as arxiv papers in the reference list lakretz et al gulordava et al khandelwal et al content i would say that the conclusion that overall prior works suggests that a small subset of units track longrange dependencies is rather overstated lakretz et al found that the units representing long distance number information were sparse but this does not imply that long range information in general is represented sparsely their method also focusses quite exclusively on finding sparsely distributed properties as more distributed properties cannot be found with ablation furthermore this is just one study focusing on one syntactic aspect i would suggest to rephrase this a bit lakretz at all actually identified several syntax units but only one of them was interpretable i find it a bit confusing that in 32 second paragraph you first talk about comparing cell state activation then say that you compare hidden state activations and then talk again about the cell state activation figure 1 c d i dont think these figures add much to the paper for the following reasons i they show only individual units and no average making it difficult to interpret the values ii while as pointed out in 51 the rate of decay is the most important the cutoff point is not indicated in the figure which puts a stress on irrelevant aspects the actual difference between the two lines i would appreciate to have figure a1 in the main text it is 
important for the story ### Summary:
this paper applies methods inspired by neuroscience to analyze the inner workings of lstm language models in particular a simple and clever approach is proposed in which a sentence is presented in its observed context vs a random one the time for a unit activation to become similar in the two contexts is used as a probe of the timescale of contextual effects the main results are that timescales increase with layer and that there are two classes of longtimescale units with different graphtheoretical properties the functionality of syntaxsensitive units previously identified in the literature is confirmed finally the analysis is replicated for a characterlevel model the paper received detailed and insightful reviews and there was a lively but always respectful discussion between authors and reviewers overall the reviewers liked the topic of the paper and the overall methodology however they had several issues with it one of the issue pertained to the holistic approach to time in the paper which is measured in number of tokens rather than in terms of syntactic distance more in general there was a feeling that the paper was somewhat short on actual insights on the exact functional role of units in a linguistic context the reviewer who assigned the most severe score was mostly concerned about one specific instance of this namely the fact that the authors focus on syntaxtracking and number agreement units whose scope should not really extend across sentences moreover the reviewer was surprised that the syntaxtracking units maintain information across longer distances than the numberagreement units that should by definition keep track of longdistance relations i am divided i welcome work that focuses on novel qualitative and quantitative analyses of an existing model i wished there were clearer takehome messages on how lstms process language but i recognize that our knowledge of deeplearning models is very preliminary and i am thus not surprised that the conclusions are not entirely clear the reviewers raised important concerns but i would not confidently claim that we know enough about the relevant units to be genuinely surprised by some of the results for example can we really say that numberagreement units are only limited to clauseinternal agreement tracking couldnt it be say that we will discover in the future they also play a role in tracking discoursedetermined pronominal number going out on a random limb here of course overall i would like to see this at least as a poster at the conference but i am assigning low confidence to my recommendation as i respect the reviewers point of view
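As an illustration of the intact-versus-random-context probe described in the reviews above, the sketch below computes a per-unit timescale as the first token position at which the context-induced activation difference falls below half of its initial value. The original work fits a logistic curve instead, so the simple threshold crossing here is an assumed simplification.

```python
import numpy as np

def unit_timescales(acts_intact, acts_random, half=0.5):
    # acts_*: (tokens_in_shared_segment, hidden_units) activations on the same
    # shared text preceded by the intact vs. a random context
    diff = np.abs(acts_intact - acts_random)
    norm = diff / (diff[0] + 1e-8)              # normalized recovery curve
    below = norm < half
    n_tokens, n_units = diff.shape
    timescale = np.full(n_units, n_tokens)      # default: never recovers
    for u in range(n_units):
        idx = int(np.argmax(below[:, u]))       # first token below threshold
        if below[idx, u]:
            timescale[u] = idx
    return timescale
```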
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper explores an alternative loss function for fitting the critic in reinforcement learning instead of using the standard mean squared loss between critic predictions and value estimates the authors propose to use a loss function that also incorporates a variance term the authors dub the approach avec the authors combine their approach with popular rl algorithms such as sac and ppo and evaluate it on the standard benchmarks for continuous control although the paper demonstrates interesting empirical results i think that the current experimental evaluation has a number of flaws that prevent me from recommending this paper for acceptance the paper provides basic motivation but it is lacking thorough theoretical investigation of the phenomena also the proposed loss is biased in the stochastic mini batch optimization due to the expectation under the squared term that is not addressed in the paper either finally i have major concerns regarding the experimental evaluation the set of openai mujoco tasks is different from commonly used tasks in literature in particular hopper and walker2d which are used in the vast majority of the literature are ignored in table 1 and figure 2 this fact raises major concerns regarding generality of the approach in conclusion the paper presents interesting results on some tasks for continuous control however the paper requires more thorough experimental evaluation to confirm the statements also a deeper theoretical analysis will greatly benefit this work i strongly encourage the authors to continue working on this approach and revise the paper to improve the theoretical and empirical analysis this paper presents a very interesting idea but in the current form it is not ready for acceptancedocsep strengths the paper proposes a simple and elegant idea for changing the value function objectives in deep rl and demonstrates reasonable empirical evidence of its potential usefulness the authors also provide a clearly articulated intuitive motivation and provide experiments to support the proposal the idea complements several other algorithms and is therefore quite widely applicable and easy to try the analysis of the experiments is also quite interesting and clearly presented weaknesses the paper is mostly well written and has interesting theoretical insights as well as empirical analysis here are some weaknesses the theoretical justification for the variance reduction while technically correct seems like it should be minuscule in theory for the t independent rv case being analyzed the condition required for the improvement is that $\Delta \triangleq 2\mathbb{V}[X_i] - \frac{1}{T}\sum_{j=1}^{T}\mathbb{V}[X_j] > 0$ which seems reasonable unless the sample in question is an outlier with a very small variance to begin with however the overall reduction itself has another $\frac{1}{T}$ scaling ie the variance reduction over the squared error case is equal to $\frac{\Delta}{T}$ which seems to be vanishingly small as the number of samples t is large even if $\Delta \gg 0$ note that for the situation where this core idea is being applied the parameter t is approximately the number of samples in the expectation over s a which is large in practice the improvements are a good sanity check but somewhat marginal in many cases especially given the error bars additional commentsfeedback in section 42 paragraph on statevalue function estimation line 3 should the targets be widehatvpi rather than vpi in figure 1 some
additional detail on the claims seems necessary eg what parameterization is being considered in the discussion below the specification for mathcall1avec mathcall2avec the authors say the reader may have noticed that these equations slightly differ from eq 3 but i am not able to see what difference is being alluded to figure 4 looks quite surprising in terms of the large qualitative difference between the baseline and avecbaseline graphs just to be sure do you measure the fit with respect to fphi or the bias corrected version gphi obviously the latter makes more sense the ablation study in section 54 seems intriguing but what the conclusions imply seems unclear it appears the authors were expecting to see some nonzero value of alpha to improve over alpha0 avec but this isnt the case some additional clarification here would be useful also it is a bit confusing to separate the plots into two depending on whether the weighting is less than one as im guessing the exact same plot is used for the nonalpha versions in each pair of these graphs in figure 5 the distance to the true value function seems to be relatively flat or even mildly increasing through the entire horizon in both graphs is this simply due to the resolution as id expect there to be a drop at least in the initial phase over time docsepthis paper presents avec a new critic loss for modelfree actorcritic reinforcement learning algorithms the avec loss can be used with any actorcritic algorithm with ppo trpo and sac being evaluated in the paper the loss builds on the meansquarederror and adds a term that minimizes es fphis hatvpithetaks the addition of that extra term is motivated by recent research on the stability of actorcritic algorithms and the benefits obtained by the avec loss are empirically demonstrated in numerous environments with avecppo avecsac and avectrpo quality the paper presents an interesting idea that is simple but wellmotivated and leads to encouraging empirical results both the theoretical and empirical motivations are strong clarity the paper flows well and is quite clear however an intuition for what the added term in the avec loss is missing section 42 motivates the added term in a mathematical way but a few sentences explaining what the added term does in simple terms may help the readers understand why avec is a better loss than simple mse originality the contribution of this paper seems original it builds on recent work but the recent work identifies problems while this paper offers an original solution to these problems significance the fact that avec provides good empirical results and can be used as the critic loss of any actorcritic reinforcement learning algorithm points at the high significance of this work many actorcritic implementations can easily be improved by using the avec loss another positive point is that the paper discusses how to implement the avec loss in algorithms that fit a neural network on batches of samples this really helps implementing the proposed loss that contains an expectation in an expectation and is therefore not trivial to properly implement in general i like this paper and recommend acceptance a few questionsissues an explicit mention of the gradient of the loss or at least a discussion of where to stop backpropagating gradients would have been interesting fphi appears two times in the avec loss and it is unclear whether the loss contributes to gradients in fphi two times or if the expectation over states is first computed without computing any gradients and then used as 
a constant in the rest of the evaluation of the loss as mentioned in clarity an intuition of what the added term of the avec loss does especially since it is inserted in the meansquarederror inside the square would help the less mathematicssavvy readers it is not crucial to understand the paper but the generality of the approach proposed in the paper may lead it to be used often by students and so an intuition of why avec works and what it does would greatly help author response the authors clarified my questions so i maintain my recommendation for acceptance ### Summary:
this paper is accepted however it could be much stronger by addressing the concerns below the theoretical analysis of the proposed methods is weak as far as i can tell the proposition has more to do with the compatible feature assumption than their method furthermore the compatible feature assumption is very strong and not satisfied in any of their experiments sec 42 does not provide strong support for their method r2 points out issues with their statements about variance and the next subsection argues from an overly simplistic diagram the experimental results are promising however r3 brought up important issues in the private discussion their implementation of sac systematically produces results worse than reported in the original paper they use a version of sac with automatically tuned temperature httpsarxivorgpdf181205905pdf 1a their sac gets average returns of 25k at 500k steps while the original implementation gets 3k at 500k steps 1b their sac on halfcheetah 10k at 1m steps original paper 11k at 1m steps 1c the same applies to humanoid there is no improvement with respect to the original sac their approach degrades performance on hopper they use nonstandard hyper parameters for sac 098 instead of 099 for the discount and 001 instead of 0005 for the soft target updates that might be the main reason why their sac works worse than the original implementation the authors use the hyperparameters suggested for halfcheetahbulletenv for all continuous control tasks for halfcheetah however the authors of the stablebaselines repository which this paper uses suggest to use the hyper parameters from the original sac paper httpsgithubcomaraffinrlbaselineszooblobmasterhyperparamssacymll48 nonetheless the results for the unmodified sac reported in this work for halfcheetahhopperwalkerant are subpar to the original results suggesting that the hyperparameters for halfcheetahbulletenv are suboptimal for these tasks given the simplicity of the change and the promising experimental results with some caveats i believe the community will find this paper interesting and will lead to followup work that can patch the theoretical gaps
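To make the discussion above more concrete, here is a hedged sketch of a residual-variance critic loss as I read it from the reviews: the batch mean of the residual is subtracted inside the square, with an optional alpha weight mixing back the ordinary MSE. This is not necessarily the paper's exact formulation, and where one stops gradients through the batch mean is precisely the ambiguity one reviewer raises.

```python
import torch

def avec_style_critic_loss(v_pred, v_target, alpha=0.0):
    # penalize the batch variance of the residual (its mean is subtracted
    # inside the square); alpha mixes back the ordinary mean-squared-error term
    residual = v_pred - v_target
    var_term = (residual - residual.mean()).pow(2).mean()
    mse_term = residual.pow(2).mean()
    return (1.0 - alpha) * var_term + alpha * mse_term
```

In this form the batch mean does receive gradients; detaching it instead would correspond to the other reading of the loss mentioned in the reviews.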
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a method to transfer policies between different mdps based on the minimization of gromovwasserstein distance this distance provides a pseudoreward that can be used to learn via rl the optimal policy in the target mdp given an optimal policy in the original mdp the method is optimal if the mdps can be mapped into each other through an isometry but works also empirically in other cases strengths good mathematical grounding further exploration of an interesting alternative to map optimal policies to new embodiments possible practical applications well written weaknesses the main issue with this paper is the experimental evaluation the presented results are just images of three cases the images of the first case fig 3 are hard to see there is no numerical performance success reward some degree of progress this makes the experimental evaluation insufficient to understand the applicability of the presented approach add more experiments numerical performance different metrics in summary while the paper presents an interesting turn on previously presented ideas and the mathematical foundation is well worked out the experimental evaluation is insufficient to support conclusions about the method docsepin this paper the authors focus on a more general cross domain imitation learning problem where only expert demonstrations from one domain is available to solve such a problem the authors use the gromovwasserstein distance to align and compare states between tasks from different domains and propose a gromovwasserstein imitation learning gwil they also show theoretically the possibilities and limitations of gwil strengths 1 this paper introduces and addresses an important and general cross domain imitation learning problem 2 appling the gromovwasserstein distance to align and compare two mdp domains provides insights to study cross domain imitation learning 3 the proposed gwil is novel the authors also well justified limitations theoretically weakness 1 there are only 3 tasks shown in the experiment section more experiment results are preferred to show the effectiveness of the proposed solution 2 experiment results are not wellvisualized it would be better to give a link showing the results in animation in general the paper addresses a general cross domain imitation learning problem where only expert demonstrations are available and proposes a novel gwil though the experiment results are not visualized well the work is highly likely to show new insights to researchers in imitation learning and domain adaptation domains docsepa method is proposed for crossdomain imitation learning without resorting to any form of correspondence this is done using a gromovwasserstein distance between policies in practice euclidiean distances on collected stateaction pairs within a given domain which finds isometric transformations that best preserve distance measures between the two domains given an imitation domain and an expert domain with example trajectories a pseudoreward is computed based on the degree to which the distances from a state to its neighbors in the imitation domain are preserved in the expert domain given these pseudorewards as computed for collected episodes sac is used as an rl algorithm to optimize the policy the paper contributes both a theoretical analysis and experiments with umaze pendulumtocartandpole and halfcheetahtofallenwalker strengths novel idea to the best 
of my knowledge for a difficult problem will inspire future work on learning by analogy combination of theory and empirical experiments im surprised by the extent the method works in practice weaknesses with no learning curves presented it is unclear if the crossdomain imitation learning actually provides a benefit for nontrivial systems in terms of learning time or performance as compared to learning from scratch it would be beneficial to see these learning curves and a wallclock compute time comparison the limitations could be better articulated the scalability is unclear although it is unreasonable to expect the first iteration of an idea like this to scale right away the isometry constraint is likely to be limiting in many settings as is the choice of the euclidean distance metric in the stateaction space lack of an intuitive presentation of the gromovwasserstein distance i had to go elsewhere to obtain the intuition the actual method used to compute the gw distance on discretely sampled trajectories the notation used for the gw setsofstate actionpairs is confusing ie tau because the gwdistance is invariant with respect to temporal ordering to my understanding whereas the notation gwtau tauseems to imply that the ordering needs to be preserved perhaps introduce a different notation for the data when the temporal information no longer needs to be preserved the connections of the trajectories sa data occupancy measures needs to be better articulated for this reader is there a simple figure that could depict the essence of eqns 6 and 7 figure 1 appears to simply show rotations if the goal is to show isometry for translations it would be better to scatter the spirals more irregularly throughout the domain of the figure similarly why not include a reflection as stated in the caption in figure 3 the agents position is largely invisible most of the readers may simply think there is an editing mistake and that the same figure was included 8 times in figure 4 do the top and bottom row come from gwcorresponding stateactions re adding time to sa to preserve uniqueness wouldnt this cause problems given that the gwdistance would now include time the paper introduces a novel idea for imitation learning it likely has many limitations but the idea of find suitable imitationbased correspondences is one that is being pursued on multiple fronts and this is a new approach with a mix of theory and some initial proofofconcept examples the paper could do better at explaining core ideas and still needs learning curves in order to understand the benefit of the crossdomain transfer docsepthis paper frames crossdomain imitation learning as an optimal transport problem using the gromovwasserstein distance this problem is highly relevant to imitation learning settings where there is often substantial domain mismatch between action and state spaces eg a humanoid robot learning to walk from a human demonstrator the paper introduces a reward function that can be optimised and proves that this is equivalent to minimising the gromovwasserstein distance between state action occupancies of an agent and expert substantial discussion proofs are included to show that minimising the gromovwasserstein distance is equivalent to recovering an optimal policy up to an isometry this is both a blessing and a curse as it allows for optimal policies to be recovered under extreme changes in domain or differences but does mean that recovered policies could be entirely unsuitable due to isometry the paper is well written and concisely 
written although does get excessively mathy at times when a figure could be more helpful experimental results corroborate the proofs and propositions and highlight the value of the proposed approach strengths this looks to be a strong contribution attempting to solve an important problem the paper is wellwritten and relatively easy to follow and results and proofs are interesting questions its unclear how easy an objective the proxy reward is to maximise i would appreciate more clarity around this eg can you show some reward curves across multiple seeds runs and convergence the gromov wasserstein distance is quite an expensive distance to compute can you comment on the computational feasibility of optimising eq 7 and potential scaling issues repeated mention of seed dependencies and effects is also concerning i would appreciate more commentary on this while i agree there are certainly settings where the gromov wasserstein distance makes sense from an imitation learning perspective recovery up to an isometry can be prohibitive eg a human showing a drone how to takeoff could result in a policy that lands drones which is the opposite of what was demonstrated i would value some discussion on these limitations would it make more sense to optimise a different distance metric or to use a different eg noneuclidean kernel in settings like these i suspect this is a nontrivial choice that needs a substantial level of domain specific knowledge does this then run counter the original objectives of this work minor pg 1 intro is well written but it would be great if fig 2 could be shown earlier to give some more intuition into the gromov wasserstein distance and the solution framing pg 2 typo this takes us beyond limitation in existing imitation learning literature much is made around limitations around learning from nonexpert demonstrations i would be interested to hear how the proposed approach would cope with these eq 1 is in dire need of a figure to explain this as i understand it although all proofs are provided in finite action and state spaces the proposed approach is said to scale to continuous spaces as ultimately it is only reliant on a suitable kernel function that can be expressed for continuous spaces is this correct we will see that in practice running our method on different seeds enables to find an optimal policy in the agents domain how do we know which seed produced the right policy fig 3 needs improving it is extremely hard to see the agent fig 45 id love to see videos of these policies did it actually learn to balance cartpole or just to swing up i enjoyed reading this paper and think it adds greatly to the conversation around crossdomain imitation learning the proposed approach has a number of strengths and limitations which i would appreciate hearing more about particularly when it comes to convergence speeds repeatability and computational requirements but also whether strengthsweaknesses of optimal recovery of a policy up to an isometry have just shifted the need for specification of a mapping between expert and agent into a different domain post rebuttal comments thank you for engaging in the process i still believe that this is a good paper ### Summary:
all reviewers suggested acceptance of the paper on the grounds that it addresses an important problem and presents and validates interesting ideas for approaching it there are some concerns regarding the limited experiments id like to encourage the authors to make an effort to address these concerns and also a few others raised in the reviews in the final version of their paper the authors already made several updates to their paper in that regard during the discussion phase so i believe that the paper would be an interesting contribution to the conference and i am recommending acceptance of the paper
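For readers unfamiliar with the Gromov-Wasserstein construction discussed in the reviews above, the following is a minimal NumPy sketch of the distance-preservation idea behind the pseudo-reward: it scores each agent state-action pair by how badly the pairwise distances to its neighbours are distorted among the coupled expert pairs. The function name, the plain Euclidean metric and the uniform coupling in the usage comment are illustrative assumptions, not the paper's actual implementation (which couples occupancy measures and trains SAC on the resulting rewards).

```python
import numpy as np

def gw_distortion_rewards(agent_sa, expert_sa, coupling):
    """Illustrative per-sample pseudo-rewards in the Gromov-Wasserstein spirit:
    the reward is close to zero when distances from an agent state-action pair
    to its neighbours are preserved by the coupled expert pairs.

    agent_sa:  (n, d_a) agent state-action vectors
    expert_sa: (m, d_e) expert state-action vectors
    coupling:  (n, m) transport plan whose entries sum to 1
    """
    # Intra-domain Euclidean distance matrices (the metric choice the reviewers question).
    d_a = np.linalg.norm(agent_sa[:, None] - agent_sa[None, :], axis=-1)    # (n, n)
    d_e = np.linalg.norm(expert_sa[:, None] - expert_sa[None, :], axis=-1)  # (m, m)
    # Squared mismatch between every agent pair distance and every expert pair distance.
    diff2 = (d_a[:, :, None, None] - d_e[None, None, :, :]) ** 2            # (n, n, m, m)
    # Distortion attributed to agent sample i, weighted by the coupling; negate it as a reward.
    distortion_i = np.einsum("ik,jl,ijkl->i", coupling, coupling, diff2)
    return -distortion_i

# Usage with a trivially uniform coupling (purely for illustration):
# n, m = len(agent_sa), len(expert_sa)
# rewards = gw_distortion_rewards(agent_sa, expert_sa, np.ones((n, m)) / (n * m))
```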
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: interesting idea applying the representation invariance capability of deep sets to mosaics of patches extracted from a whole slide image the method was compared with a stateoftheart search engine with competitive results the data split also is properly described in the manuscript several elements of the method are not described in detail and it would allow for increasing the reproducibility of the presented methodology the initial selection of the image via clustering is not properly described the formulation of deep sets is missing in the manuscript docsep modelling the slide as one vector improves search speed over existing work based on bags of patches to represent wholeslide images the proposed approach outperforms previous work eg yottixel in searching similar cases in most primary sites of tcga one reference is missing in section 2 i dont understand what is the advantage of making a mosaic with 40 images and make batches with 16 of those if in the end all patches are treated independently by efficientnet as 640 different samples the yottixel method is heavily used in this paper from patch sampling and clustering to using its search functionalities however this makes the paper lack many technical details for example patch sampling and clustering techniques and also makes it difficult to clearly appreciate what the novel contribution of this work is since it heavily relies on previous work if the novelty is in the use of a single vector to represent the slide instead of a bag of patches then this paper would have benefitted from a more detailed comparison between the two strategies focusing on failures of one or the other method for example why are there 46 brain patients and cnnds finds 91 are the 45 additional cases some false positives a confusion matrix would have helped as well as some discussion on how to further improve the presented method the deep sets method is used based on previous work but is not explained in this paper the use of a permutation invariant approach based on deep sets is not fully justified and it is not clear why other approaches like pooling would not work here no visual examples are shown no discussion on cases of failure and possible reasons why this happens docsepthe authors focused on solving an important task in computational pathology that is fast wsi searching the performed experiments show method potential and possibility moreover the authors compared their approach with another available method shows significant improvement the method description should be extended it is not clear why mosaic is created from 40 patches how method will works if we use 6432168 patches will be faster lack of details about applied data augmentation the related work section does not include sufficient information a few very important approaches such as 1 or 2 are missed 1tellez d litjens g van der laak j ciompi f neural image compression for gigapixel histopathology image analysis ieee trans pattern anal mach intell 2021 feb432567578 doi 101109tpami20192936841 epub 2021 jan 8 pmid 31442971 2campanella gabriele vitor werneck krauss silva and thomas j fuchs terabytescale deep multiple instance learning for classification and localization in pathology arxiv preprint arxiv180506983 2018 docsep the proposed framework is technically sound and easy to implement it is an interesting topic which is very challenging and notoriously difficult to address this paper is 
clearly written and easy to follow the proposed method is only compared with one method the proposed method is only evaluated on one dataset some methods which are related to the proposed method are cited but not compared eg hematis work 1 1 sobhan hemati mohammad hadi mehdizavareh shojaeddin chenouri and hamid r tizhoosh a nonalternating graph hashing algorithm for largescale image search arxiv preprint arxiv201213138 2020 ### Summary:
initially the majority of reviewers suggested a weak rejection after the rebuttal one reviewer changed their opinion to weak accept giving an even split between weak reject and weak accept i do think the authors provided an adequate rebuttal in particular the new experiment on lung cancer comparing against the stateoftheart as such i propose acceptance as a poster presentation i do agree with the comment by reviewer 1 that some additional discussion on the final experiment should be added to the cameraready paper
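As an aside, the permutation-invariant aggregation discussed in these reviews can be sketched in a few lines in the Deep Sets style: embed each patch independently, pool with a symmetric function, then transform the pooled vector. This is only an illustrative sketch; the layer sizes, the mean pooling and the feature dimension are assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SlideEncoder(nn.Module):
    """Illustrative Deep Sets-style slide encoder: rho( pool( phi(patch_i) ) )."""
    def __init__(self, patch_encoder, embed_dim=1280, out_dim=512):
        super().__init__()
        self.patch_encoder = patch_encoder  # e.g. a CNN trunk producing embed_dim features per patch
        self.rho = nn.Sequential(
            nn.Linear(embed_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, patches):             # patches: (num_patches, C, H, W) from the slide mosaic
        phi = self.patch_encoder(patches)   # (num_patches, embed_dim), each patch embedded independently
        pooled = phi.mean(dim=0)            # symmetric pooling -> invariant to patch ordering
        return self.rho(pooled)             # a single vector representing the whole slide
```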
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: knowledge of the underlying graph is not required to estimate spaceiv the results section is fairly comprehensive and investigates violations of the assumptions and compares against an existing approach and two oracle algorithms the graphical conditions for identifiability seems to be quite restrictive and simulations suggest that the spaceiv estimator can be quite off when assumptions are violated consider defining terms that might not be obvious such as id and im i recommend providing a discussion about when we might expect the identifiability conditions of this model to hold in the real world docsep1 the idea of the identifiability assumptions is new 2 the introduction of the algorithm for the proposed estimator is clear 1 the proposed assumptions are not intuitive 2 there are some errors in the proof of theorems 3 the proposed estimator in the algorithm is not consistent with the estimator used in the numerical experiments 4 the proposed method performs bad when the sample size is not too large and there are many outliers in the estimates even if the sample size becomes large 1 i think in the introduction selection this is the case if we being independent of x and i should be changed to this is the case if we being independent of i because if h and varepsilony are independent of x then we do not need to use the instrumental variable method 2 can you explain more about more precisely we can choose iek with ek kin1m being and ksim u1ldotsm based on the definition of i i belongs to rm i feel they are contradictory 3 in proposition 2 i think it is better to explain the meaning of the notation dagger in the main paper because without referring to the appendix the readers may not understand the meaning of this notation 4 there are some errors in the proof of theorem 3 the formulas of 6 and 7 are wrong 5 can the proposed assumptions be verified in practice 6 in the algorithm it proposes to use the limited maximum likelihood estimator liml but in the numerical experiments it says that due to computational reasons in the experiments we use the twostage least squares estimator instead of liml the former estimator minimises the enumerator of 14 on one hand they are not consistent on the other hand it also shows that liml is not a good choice in this algorithm in addition can you introduce more about the liml method what is the meaning of the former estimator minimises the enumerator of 14 7 based on the simulation results in figure 4 spaceiv estimator performs bad when the sample size is not too large the range of the estimated values is very wide and there are some outliers which means this estimator is not stable 8 there are some typo errors in the paper for example in figure 6 a1a3 needs to be changed to a1a2a3 please check all the typo errors carefully docsep the paper presents some potentially novel and useful ideas while i am not aware of many cases where multiple instruments are available the authors motivate the work in the setting of multiple experiments which may be represented as multiple instruments while some of the ideas seem to have merit the paper seems to be written very hastily to be published as is i found the notation very confusing for example in section 2 instruments and covariates are indexed ij xj in section 21 they are indexed ij xj and as far as i could tell in section for they are just denoted using integers eg s is a set of nodes but takes values in 1 dots 
d these inconsistencies make it very hard to read the paper and understand the technical part and many notations are not explained very well for example what does suppbeta in proof of theorem 3 mean isnt beta an element of matchal beta in addition the results are not explained in a very intuitive manner eg why is absolute continuity with respect to lebesgue measure important for identifiability how is condition b2 in section 4 a graphical condition i suggest that the authors rewrite the paper with a preliminary section where they can be explain all of the notations and definitions used in the text docsepthe paper is well written and for the most part easy to follow assumptions about the models and the corresponding graphs are well motivated and made explicit with helpful examples and descriptions both graphical criteria and and estimation algorithm are provided which makes the proposed methodology easy to apply in practice the paper suffers from some notational issues mainly some concepts are only partially defined or not defined at all making the paper at times harder to read than it otherwise would have been i will list these in more details in the detailed comments section 5 considers the estimation of the causal effect there seems to be a slight logical jump here since up until this point no distributional assumptions have been made aside from the absolute continuity but suddenly a test statistic is presented that is supposedly fdistributed i think the authors should clarify exactly what assumption have to be made about the model so that the test is valid the time complexity of algorithm 1 is not considered the algorithm involves evaluations over all subsets of specific size which at a glance would quickly make the proposed estimator unfeasible as the size of the graph increases in the simulations the authors consider a graph with 20 xvariables which seems small to me there are some notational issues in the paper in section 2 the authors present the scm but it is not fully specified how are the instruments defined in this model what are h and g in section 21 the graphical representation is defined somewhat informally later the associated graph g is used but it has never been defined and operations such as ancestors an are not defined is a node ancestor of itself for example i do not understand the motivation to call the components of instruments intervention nodes since these have nothing to do with interventions defined by the dooperator in the scm context i guess this is related to the last paragraph of section 1 but the motivation is not clear to me algorithm 1 performs possible a large number of hypothesis tests increasing the likelihood of type i error i wonder if the authors have considered to include a correction for this based on the number of tests made in appendix b the authors provide a proof of proposition 7 the claim is that paw in imb 0 but at the last step only an inequality paw in imb geq 0 is obtained is this a typo should the inequality be leq instead also the final line of the proof mentions lemma 7 while this is a proof of proposition 7 ### Summary:
meta review i thought the paper provides a novel family of assumptions concerning iv estimation with a large and structured treatment variable results are of much relevance to the uai community however some of the technical parts of the paper need to be cleared of minor but distracting mistakes the graphical characterization of assumptions does a solid job but it is still somewhat involved
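Since the reviews note that the experiments replace LIML with the two-stage least squares estimator, a bare-bones version of that standard estimator is sketched below; it ignores the paper's subset search, hypothesis tests and graphical conditions, and the absence of an intercept term is a simplifying assumption.

```python
import numpy as np

def two_stage_least_squares(instruments, treatments, outcome):
    """Plain 2SLS: project the treatments onto the instruments, then regress
    the outcome on the fitted treatments (illustrative, no intercept)."""
    # Stage 1: least-squares projection of X onto the instrument matrix I.
    stage1, *_ = np.linalg.lstsq(instruments, treatments, rcond=None)
    fitted_treatments = instruments @ stage1
    # Stage 2: regress Y on the fitted treatments to estimate the causal coefficients.
    beta, *_ = np.linalg.lstsq(fitted_treatments, outcome, rcond=None)
    return beta
```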
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper provides a benchmark to evaluate approximators for wasserstein1 distances as loss functions in the generative adversarial network setting s1 while previous works use discrete distributions for benchmarking solvers this work suggests continuous distributions which is a novel aspect for benchmarking w1 w1 the benchmark contains only one image dataset with a single mode faces the addition of more image datasets especially multimodal ones eg cifar10 would improve the versatility of the benchmark and extend it to conditional models docsepauthors propose a generic methodology to construct benchmark pairs with ground truth ot plan ot cost and ot gradient we can use this tool to evaluate the performance of the neural dual ot solvers approximating the wasserstein1 distance or the gradient of wasserstein1 distance specifically the authors employ the 1lipschitz minfunnel functions to compute transport rays and define the ray monotone map with them we can define a target distribution mathbbq and compute ot cost and ot gradient based on the original distribution mathbbp the authors provide an elaborate introduction to the wasserstein1 and its neural dual ot solvers followed by compact math proof about their benchmark pairs experiments are also reasonable it is also a good point of view to consider the gradient of the wasserstein1 distance some minor concerns is it hard to turn hyperparameters for this method for example when you compute the highdimensional benchmark pairs you choose bn sim mathcaln001 and p 8 how do you choose it how long does it cost for the hyperparameter search the dimension of images in reality is higher than 27 can this tool handle higher dimensions if we carefully choose minfunnel function u instead of randomly picking will the performance be better what will be the effect of increasing n and d paper mentions in wgans the solvers move the generated distribution bad images mathbbq in our construction to the real distribution good images mathbbp however mathbbp is synthetic distribution and mathbbq is computed ground truth real image distribution in the case of images benchmark why do the solvers move mathbbq to mathbbp instead of the opposite authors mention solvers mm mmr takes longer for training compared with gp so and lp is the time gap significant docsepmotivated by the lack of benchmarks for w1 dual methods other than perceptual measures such as fid or is this paper proposes to create a semisynthetic set of benchmark datasets with known optimal transport plans maps and distance to do this the paper first develops theory about maps that are optimal by construction then the paper proposes concrete methods for constructing the necessary functions and computing the necessary plans maps and gradients finally synthetic dataset pairs are generated from truncated gaussian data and celeba data at various dimensionalities and used to evaluate and discuss many existing w1 methods discusses good overview of w1 methods proves theoretical results about how to construct maps that are optimal wrt w1 proposes novel way to construct groundtruth semisynthetic benchmarks for evaluating wasserstein1 dual solvers provides code and datasets for benchmark datasets and algorithms evaluates the gradient of the w1 wrt the parameters which is actually most important for most generative methods only one realworld dataset celeba is considered and the synthetic datasets are quite 
simple ie truncated gaussians it seems including more realworld datasets even mnist or cifar10 would be useful or using interesting realworld tabular data for smaller dimensions eg even something like iris this limitation is mentioned in the text but does seem to be the main limitation it seems the benchmark only considers maps where the samples are grouped more closely together or the reverse maps that expand parts of the space or where some expand and some contract would be better it is unclear whether the benchmark maps properly represent realworld ot maps minor but nonetheless important for final paper all result tables are in the appendix and the figures are in odd places with nonstandard captions at least some summary table of the results and your recommendations for suggested methods based on context would be important to include what methods would you recommend and why the answer may be a combination of easeofuse convergence behavior and overall performance docsepthis paper proposes a benchmark to evaluate the methods of computing the wasserstein1 distance the authors construct 1lipschitz functions and use them to build ray monotone transport plans which yield pairs of continuous benchmark distributions in highdimensional spaces some wgan dual form solvers are evaluated using these benchmark pairs 1 this paper proposed a benchmark to evaluate the methods of computing the wasserstein1 distance the problem is interesting to the community 2 this paper is wellwritten and technically sound the method uses 1lipschitz functions to construct pairs of continuous distributions which is well designed 3 this paper thoroughly evaluates popular wgan dual form solvers in highdimensional spaces using these benchmark pairs 1 the title of this paper is ambiguous and may lead to inappropriate reviewers 2 the theoretical analysis and the intuition of the proposed method is weak it is unclear why the proposed method works well than previous methods 3 evaluating the wasserstein1 distance does not directly validate the superiority of the methods on specific tasks which may need more explanations docsepthis paper proposes a benchmark for computing the wasserstein1 distance the authors first propose to use 1lipschitz functions to build ray monotone transport plans and obtain known ot maps these ground truth maps are then used to benchmark dual ot solvers used in particular in the wasserstein gan framework this papers proposes a method to build known ot maps using 1lipschitz minfunnet functions this choice is clearly justified as these functions are universal approximator of 1lipschitz functions prop2 having known ot maps allows to faithfully compare the ot solvers they carefully build transport ray of these functions the paper is well written and easy to follow the authors tackle an interesting problem and having more comparison like this one is crucial i regret that the results of the benchmarks are only available in the appendices i would recommend the authors to include some of them in the main paper since those are the main results of the paper the restriction to 1lipschitz minfunnet functions seems to be a main limitation of this work it seems that in the experiments only one random start is considered is there any reasons why the authors did not perform multiple runs this seems to impede to assess the methods stability and robustness with regard to the random start and the parameters an and bn in the funnel docsepthis paper proposes a benchmark for methods computing the wasserstein1 distance section 
1 summarizes background information on computing w1 often with the dual in eq 4 and 5 and how the w1 is used in gan training section 2 summarizes methods estimating the dual potentials and transport maps section 3 describes the benchmark distributions and section 4 shows the results of evaluating the methods on the results which are quantified in section d of the appendix approximating w1 computations is widely used and a difficult setting to benchmark because the groundtruth transport maps and distances are often not known i am not aware of an established w1 benchmarks and papers often have to rely on downstream tasks such as inception scores to justify an algorithmic improvement to the w1 approximation this paper presents nontrivial settings where the groundtruth transport map is known and uses it to the experimental results are thorough and the paper strongly shows that minimax methods solve the benchmark tasks in most settings at least for obtaining a gradient that approximates the true gradient while the paper proposes a new benchmark for approximating the w1 it unfortunately does not present results in established gan settings as the groundtruth maps are not known thus research that is ultimately focused on improving the w1 computations in settings such as gans may be able to use these benchmarks for preliminary experiments but these benchmark tasks may not reflect the true difficulties of these methods thus established and powerful it is not clear how solved w1 ot is how much work remains in the field and how many new directions this benchmark will enable in other words better solutions to this benchmark will not directly enable new methods or new gan results ### Summary:
this paper proposes a new benchmark to evaluate solvers of optimal transport problems the reviewers concur that the benchmark is wellexecuted and novel some are concerned that a better benchmark for ot problems will not drive progress as the successes of wasserstein gans occur despite their failure to solve ot however it seems like a useful intermediate check to deepen understanding of why wasserstein gans and future models work at least by eliminating nonexplanations
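As a rough illustration of how a ground-truth 1-Lipschitz potential can be constructed, the snippet below takes a pointwise minimum of cone-shaped pieces ||x - b_n|| + a_n; each piece is 1-Lipschitz, and a pointwise minimum of 1-Lipschitz functions remains 1-Lipschitz. The exact min-funnel parametrization, the sampling of a_n and b_n, and the construction of transport rays in the benchmark paper are more involved, so treat this only as a sketch of the general idea.

```python
import numpy as np

def min_funnel_potential(x, centers, offsets):
    """Illustrative 1-Lipschitz potential: u(x) = min_n ( ||x - b_n|| + a_n ).

    x:       (d,) evaluation point
    centers: (N, d) cone apexes b_n
    offsets: (N,) vertical shifts a_n
    """
    return np.min(np.linalg.norm(x - centers, axis=1) + offsets)

# Example: a potential in dimension 8 built from 5 random pieces (illustrative only).
# rng = np.random.default_rng(0)
# u = lambda x: min_funnel_potential(x, rng.normal(size=(5, 8)), rng.normal(size=5))
```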
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper describes a new loss function for training that can be used as an alternative to maximum likelihood cross entropy or as a metric that is used to finetune a model that is initially trained using ml experiments are reported on the wmt 2014 englishgerman and englishfrench test sets i think this is an idea worth exploring but overall i would not recommend acceptance i have the following reservations i found much of the motivationjustification for the approach unconvincing too heuristic and informal what does it mean to overgeneralize or plunge into local optima can we say anything semiformal about this alternative objective the improvements over ml are marginal and there are a lot of moving partsexperimental settings in these models ie a lot of tweaking the results in tables 2 and 3 show a 036034 improvement over ml using dsd btw what is meant by dsddeep or mldeep im not sure these terms are explained the comparison to related work is really lacking the attention is all you need paper vaswani et al reports 284410 bleu for these test sets respectively 34596 bleu points better than the results in this paper thats a huge gap its not clear that the improvements again less than 05 bleu points will remain with a stateoftheart system and i think the paper is misleading in how it cites previous results on these data sets there is no indication in the paper that these better results are in the literature some small things unplausible implausible husz 2015 showed that dp q is not identical to its inverse form dq p this is well known predating 2015 for sure docsepthis paper presents a new loss objective for nmt the main idea is to optimize an interpolation of klpq and klqp which is kulbackliebler divergence computed at the wordlevel for model distribution q and true distribution p the motivation is that klpq finds a q that covers all modes of the data whereas klqp finds a q that concentrates on a single mode so optimizing on the interpolation gets the best of both worlds in my opinion this is a relatively simple and known idea in ml but perhaps not in mt im not sure on the other hand the nmt experiments are wellimplemented and convincingly shows that it improves bleu on a wmt dataset in general the experiments look solid i applaud the multiple baseline implementations in particular even including the smt baseline the lack of transformercnn models is not a demerit in my opinion since the focus is on loss objectives and the lstm models are just as reasonable the paper is clearly written with a few exceptions it is not clear why you have to first train with ml before switching to the proposed dsd objective as such section 45 should be prefaced with a motivation also figure 3 is hard to read with the two kinds of plots maybe split into two figures an open question is does your model capture the issues of mode covering as mentioned in the motivation it would be helpful to include analyses of the wordlevel distributions to quantify the differences eg word entropy between ml and various kldsd solutions also i would recommend showing traintest set perplexity scores of the various proposed and baseline methods as a minor point for argumentation it is not clear that your proposal addresses the sequencelevel loss vs wordlevel loss issue it is conceivable but it seems indirect and there is no quantifiable connection between the wordlevel loss such as dsd and a sequencelevel loss or is there 
docsepthis paper describes an alternative training objective to crossentropy loss for sequencetosequence models the key observation is that crossentropy is minimizing klpq for a data distribution p and a model distribution q they add another loss that minimizes the inverse klqp to create their dualskew divergence the idea is tested in the context of neural mt using a model similar to that proposed by bahdanau et al 2015 with results on englishtofrench and englishtogerman wnt 2014 in the context of beam search improvements are small 05 bleu but statistically significant this is an interesting idea and one i certainly wouldnt have thought of on my own but i think it is currently lacking sufficient experimental support to warrant publication the paper feels strangely dated with most experiments on twolayer models and only two citations from 2017 the experiments compare against an inhouse maximum likelihood baseline varying greedyvsbeam search and model depth and against a number of alternative training methods minimum risk scheduled sampling rl with numbers lifted from various papers these latter results are not useful as the authors helpfully point out that the baseline results in this paper are universally higher than the baselines from these other papers furthermore it feels like methods designed to address exposure bias andor bleuperplexity mismatch are not the right comparison points for this work as it does not attempt to address either of these issues i would instead be much more interested to see a comparison to label smoothing szegedy et al 2015 which perhaps addresses some of the same issues and which produces roughly the same magnitude of improvements also the literature review should likely be updated to include edunov et al 2017 in general the improvements are small though technically statistically significant the baseline models are somewhat shallow and the deltas seem to be decreasing as model depth grows so it is hard to get too excited smaller concerns for table 1 it would be helpful to explain why baseline is not equal to beta1 with some effort i figured out that this was due to the alpha term modifying the crossentropy objective when beta1 it would also be useful to tell us what switching point was used for table 1 and figure 2 christian szegedy vincent vanhoucke sergeyioffe jonathon shlens and zbigniew wojna 2015 rethinking the inception architecture for computer vision corr abs151200567 httparxivorgabs151200567 sergey edunov myle ott michael auli david grangier and marcaurelio ranzato 2018 classical structured prediction losses for sequence to sequence learning in proceedings of naaclhlt 2018 ### Summary:
this paper proposes a new loss function that can be used in place of the standard maximum likelihood objective in training nmt models this leads to a small improvement in training mt systems there were some concerns about the paper though one was that the method itself seemed somewhat heuristic without a clear mathematical explanation the second was that the baselines seemed relatively dated although one reviewer noted that this seemed like a bit of a lesser concern finally the improvements afforded were relatively small given the high number of good papers submitted to iclr this year it seems that this one falls short of the acceptance threshold
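For concreteness, one schematic way to write the interpolated word-level objective the reviewers describe is given below; alpha is the interpolation weight and V the target vocabulary. This is only the reviewers' reading of the objective - the paper's actual dual skew divergence involves additional skewing of the distributions, so the exact form may differ.

```latex
\mathcal{L}(q) \;=\; \alpha \,\mathrm{KL}\!\left(p \,\|\, q\right)
             \;+\; (1-\alpha)\,\mathrm{KL}\!\left(q \,\|\, p\right),
\qquad
\mathrm{KL}(p\,\|\,q) \;=\; \sum_{w \in V} p\!\left(w \mid y_{<t}, x\right)
      \log \frac{p\!\left(w \mid y_{<t}, x\right)}{q\!\left(w \mid y_{<t}, x\right)}
```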
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper shows that the problem of defending mnist is still unsuccessful it hereby proposes a model that is robust by design specifically for the mnist classification task unlike conventional classifiers the proposal learns a classdependent data distribution using vaes and conducts variational inference by optimizing over the latent space to estimate the classification logits some extensive experiments verify the model robustness with respect to different distance measure with most stateoftheart attacking schemes and compared against several baselines the added experiments with rotation and translation further consolidate the value of the work overall i think this is a nice paper although being lack of some good intuition the proposed model indeed show superior robustness to previous defending approaches also the model has some other benefits that are shown in figure 3 and 4 the results show that the model has indeed learned the data distribution rather than roughly determining the decision boundary of the input space as most existing models do however i have the following comments that might help to improve the paper 1 it would be more interesting to add more intuition on why the proposed model is already robust by design 2 although the paper is designed for mnist specifically the proposed scheme should apply to other classification tasks have you tried the models on other datasets like cifar10100 it would be interesting to see whether the proposal would work for more complicated tasks when the training data for each label is unbalanced namely some class has very few samples would you expect the model to fail 3 equation 8 is complicated and still modeldependent without further relaxation and simplification its not easy to see if this value is small or large or to understand what kind of message this section is trying to pass 4 although the main contribution of the paper is to propose a model that is robust without further defending the proposed model could still benefit from adversarial training have you tried to retrain your model using the adversarial examples you have got and see if it helps docsepin this paper the authors argued that the current approaches are not robust to adversarial attacks even for mnist they proposed a generative approach for classification which uses variational autoencoder vae to estimate the class specific feature distribution robustness guarantees are derived for their model through numeric studies they demonstrated the performance of their proposal abs they also demonstrated that many of the adversarial examples for their abs model are actually meaningful to humans which are different from existing approaches such as sota overall this is a well written paper the presentation of their methodology is clear so are the numerical studies some comments 1 it was not very clear to me that the authors were estimating the px for each y the transition from pxy to px at the end of page 3 was astute and confused me the authors should make it more clear 2 it would be beneficial if the authors could comment on the how strictloose the lower bound of 2 is as it is critical in estimating the class specific densitydocseppaper summary the paper presents a robust analysis by synthesis classification model that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations the architecture involves training 
vaes for each class to learn pxy and performing exact inference during evaluation the authors show that abs and binary abs outperform other models in terms of robustness for l2 linf and l0 attacks respectively the paper in general is well written and clear and the approach of using generative methods such as vae for better robustness is good pros using vaes for modeling class conditional distributions for data is an exhaustive approach the authors show in fig 4 that abs generates adversarials that are semantically meaningful for humans which is not achieved by madry et al and other models cons 1 the main concern with this work is that it is heavily tailored towards mnist and the authors do mention this scaling this to other datasets does not seem easy 2 using vaes to model the conditional class distributions is a nice idea but how does this scale for datasets with large number of classes like imagenet this would result in having 1000s of vaes 3 it would be nice to see this model behaves for skewed datasets ### Summary:
the paper presents a technique for training robust classification models that uses the input distribution within each class to achieve high accuracy and robustness against adversarial perturbations strengths the resulting model offers good robustness guarantees for a wide range of normbounded perturbations the authors put a lot of care into the robustness evaluation weaknesses some of the shortcomings attributed to the previous work seem confusing as the reported vulnerability corresponds to threat models that the previous work did not make claims about overall this looks like a valuable and interesting contribution
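To make the inference procedure these reviews describe more concrete, here is a small PyTorch-style sketch: for every class-specific generator, a bound on log p(x | class) is maximized over the latent code and the resulting values are used as logits. The gradient-based latent search, the unit-variance Gaussian likelihood and the MAP-style prior term are simplifying assumptions; the paper's exact variational inference and optimization scheme differ.

```python
import torch

def abs_style_logits(x, class_decoders, latent_dim=8, steps=200, lr=0.05):
    """Illustrative analysis-by-synthesis classification: one latent search per class."""
    logits = []
    for decode in class_decoders:                    # one generative model per class
        z = torch.zeros(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)

        def bound():
            x_rec = decode(z)
            rec = -0.5 * ((x_rec - x) ** 2).sum()    # unit-variance Gaussian log-likelihood (up to a constant)
            prior = -0.5 * (z ** 2).sum()            # standard-normal log-prior (MAP-style surrogate for the KL term)
            return rec + prior

        for _ in range(steps):
            opt.zero_grad()
            loss = -bound()
            loss.backward()
            opt.step()
        logits.append(bound().detach())              # best value found for this class
    return torch.stack(logits)                       # higher = class explains the input better
```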
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work presents a study on applying 1 combining pseudolabeling with entropybased label filtering for representation learning and 2 novelnovel and basenovel manifold mixup with entropybased filtering for adapting base representation to novel classes for improving fewshot image recognition tasks under appropriate hyperparameter settings the proposed approach achieves competitive performance on standard fewshot image recognition benchmarks ablation studies are conducted to investigate the gains brought by individual techniques strengths 1 the paper is wellwritten and easy to follow approaches experiment settings and implementation details are clearly described in a way that helps reproducibility of the proposed work experiment results are well organized 2 experiments and ablation studies seem thorough standard benchmarks are used latest works that follow the same experimental settings are included as baselines necessary baselines are included ablation studies included all components of the system 3 experiment results seem to show highly competitive performance without transductive learning weaknesses 1 while it is clear that the proposed approach worked it is not very clear how it worked here are my recommendations for entropybased filtering try to fit the qualitative examples in the main paper show some hard examples selected by mixup maybe something like 09x image a 01x image b which classes often benefit from pseudolabeing mixup and entropy filtering 2 while not necessary it would be interesting to learn about the sensitivity of the proposed approach to hyperparameters the use of different hyperparameters for 1 and 5shot might be a deviation from classical setups although its not entirely surprising that different hyperparameters might be necessary as the dataset size changes it would help if measures are taken to control overfitting to specific benchmarks author response adequately addresses my concerns limitations are addressed adequately docsepthe paper presents a hard manifold mix up process to augment the fewshot samples during finetuning for improving accuracy performance the mixup is carried out in different settings novelnovel novelbase and the hard samples from the mix up based on margin are utilized during finetuning the papers idea of using manifold mix up is complemented by the use of hard sampling for further use during finetuning which is simple and interesting i do not see much improvement from this method in comparison to previous methods in fact in many settings the improvement is negligible such as in table 1 moreover sometimes it is not practical to have access to the base class samples during finetuning stages and the assumption therefore source free finetuning is not practical with this method the pseudolabel creation for base class samples with a classifier initially built on novel target samples is cumbersome as the label set do not match in most cases therefore the filtering step on top of this assumption with entropy criterion does not make sense with a reasonably big domain between base classes and novel classes also many standard benchmark datasets are missing in the evaluation such as tieredimagenet cub domain shift experiments such as miniimagenet to cub docsepthis paper presents felmi a type of mixup method for fewshot learning with marginbased uncertainty criteria the work aims to augment new data in a mixup form to tackle the overfitting 
problem in a fewshot classification setting with this the presented method consists of six phases overall after pretraining on the base classes and obtaining the pseudo labels of the base examples both novelnovel and basenovel mixup samples are generated for data augmentation the pretraining method employs selfsupervisedbased invariant and equivariant representation learning ier 19 finally the evaluation is presented using three fewshot learning benchmarks in general the idea of the paper is interesting specifically i like how the presented method is inspired by several recent works of base examples and mixtup method for fewshot image classification to present novel dataaugmentation methods additionally the paper is wellwritten and easy to follow however i think the method contains the following weaknesses the overall model is complex having several stages with different criteria the method uses many tricks such as selfsupervised learning for pretraining and active learning for the hard mixtup therefore the methods evaluation and justification became more complicated compared to standard methods such as protonet the proposed method is evaluated on three small datasets but i think some large datasets are required having a 5way classification problem the felmi can not gain significant accuracy in cifarfs dataset i think this is ok but some extra evaluation might help us understand the proposed methods classification gainloss though the presented method is proposed for fewshot learning it is only evaluated with image classification additionally evaluation with large datasets such as tieredimagenet is missing while the presentation of the paper is good i think the authors can improve the related work by clearly discussing the difference between the approaches such as base examples for fewshot learning with the proposed method i think the conclusion can include some limitations and future works too docsepin this paper few shot learning with hard mixup felmi is developed to mitigate the issue of data scarcity the proposed method is composed of 6 steps 1 pretrain the model on the base dataset using the crossentropy loss with auxiliary selfsupervised loss 2 train a linear logistic regression model on the top of the learned feature extractor using the novel data to generate pseudolabels for the base dataset 3 filter the pseudolabels based on entropy 4 do novelnovel and basenovel mixup to generate more data 5 choose hard mixup samples based on the margins in classification probability 6 finetune the model using the filtered base data novel data and hard mixup data experiments indicate that the proposed method leads to improved performance in fewshot learning strengths the paper is well organized and easy to follow the proposed method is straightforward and easy to understand sufficient ablation studies in the experiments weaknesses among the 6 steps in felmi only step 5 is original choosing hard mixup samples however table 4 indicates that step 5 only leads to marginal improvement in fewshot classification accuracy the model is pretrained by crossentropy loss and selfsupervised loss it is known that pretraining with auxiliary selfsupervised loss leads to improved performance in transfer learning for downstream tasks although felmi achieves the best performance in tables 1 2 and 3 it relies on a stronger pretraining method it does not demonstrate the effectiveness of hard mixup compared with the simplest finetuning methods in fewshot learning eg rfssimple eccv 2020 the proposed method is more 
computationally expensive specifically the proposed method must assign pseudo labels for the base data it becomes an issue when the base dataset is huge its better to provide an empirical or theoretical analysis of the time and space complexity ### Summary:
the submission introduces an approach to fewshot learning called fewshot learning with hard mixup felmi which as its name suggests applies hard manifold mixup as an augmentation strategy for adapting a pretrained model to a small training set of downstream examples the model is first trained on the base classes using a combination of supervised learning and invariant and equivariant representation learning ier then a linear classifier is trained on top of the frozen backbone using the novel classes support set and pseudolabels are generated for the entire base class dataset base class examples are filtered to exclude ones with low pseudolabel entropy using a thresholding hyperparameter featurelevel mixup is applied on basenovel and novelnovel example pairs and the resulting examples are subsampled to the n hardest ones based on the difference in top2 probabilities the model is then finetuned on the pseudolabeled base examples novel examples and hardmixup examples results are presented on two cifar100based fewshot classification benchmarks cifarfs fc100 and miniimagenet in the 5way 1shot and 5way 5shot settings felmi is shown to outperform competing approaches ablation analyses are also presented to assess the contribution of various components on performance improvements reviewers highlight the submissions writing quality and clarity 7gpu 2qz6 whpk opinions are split on how straightforward the proposed approach is with reviewers 3cts and whpk noting its simplicity and reviewer 2qz6 expressing concerns over its many moving parts opinions are also split on the significance of the performance improvements reviewer 7gpu finds felmis performance competitive with competing approaches and reviewers 3cts and whpk are concerned that the improvements are modest the authors respond by emphasizing that felmi is simple and effective but reviewer 3cts remains eager to see a clearer performance gap reviewer 3cts is also concerned that the approach is not sourcefree to which the authors respond that the unlabeled data could also come from another source than the upstream training dataset following the discussions opinions remain divided among reviewers although the majority is either leaning towards or strongly recommending acceptance reviewer 3cts still recommends rejection but is open to an acceptance recommendation i therefore recommend acceptance
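As a concrete illustration of the hard-mixup selection step summarized above, the sketch below mixes feature pairs, scores them with the current classifier and keeps the mixed samples whose top-1/top-2 probability margin is smallest; the Beta mixing distribution, the random pairing and the soft-label construction are assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def select_hard_mixup(feats, labels, classifier, num_hard, alpha=2.0):
    """Illustrative hard manifold mixup: keep the mixed features the classifier
    is least certain about (smallest gap between its top-2 class probabilities)."""
    n = feats.size(0)
    lam = torch.distributions.Beta(alpha, alpha).sample((n,)).to(feats)   # mixing coefficients
    perm = torch.randperm(n)
    mixed = lam[:, None] * feats + (1 - lam)[:, None] * feats[perm]       # feature-level mixup

    with torch.no_grad():
        probs = classifier(mixed).softmax(dim=-1)
    top2 = probs.topk(2, dim=-1).values
    margin = top2[:, 0] - top2[:, 1]                                      # small margin = hard sample
    hard_idx = margin.argsort()[:num_hard]

    y = F.one_hot(labels, probs.size(-1)).float()
    soft_labels = lam[:, None] * y + (1 - lam)[:, None] * y[perm]         # mixed soft targets
    return mixed[hard_idx], soft_labels[hard_idx]
```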
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes a loss to relax the assumption of using a fixed k for topk classification learning the authors use the existing differentiable sorting and ranking operators experimental results also achieve a stateoftheart on imagenet strengths the motivation of this paper is clear to draw k from a probability distribution for training the idea of this paper is pretty novel and exciting which makes the classification model robust the extensive experiments conducted on five data sets are sufficient to show the advantages of the proposed idea weaknesses the details of the differentiable sorting networks is not represented how to rank the predicted scores of the final classification layer and get the probability distribution in figure 1 the first row rank1 are multiplied by 1 and the second rowrank2 are multiplied by 05 please explain the reason this paper derives a family of topk cross entropy losses which is a novel practice the experimental analysis on imagenet including the impact of the distribution and ranking set size m etc is concrete and sufficient docsepthis paper addresses topk classification learning based on the recent progress on differentiable sorting and ranking the author proposes a loss function for topk classification where the k is not fixed but follows a given probability distribution to improve the efficiency a splitter selection network is proposed so that fewer layers are required for the sorting network the proposed loss function can be combined with different sorting methods in experiments the loss function is shown to be effective in training a model from scratch on cifar10 it can also be used in finetuning on imagenet dataset and has performance gain strengths 1 the idea of using different probability distributions for k is interesting the results also demonstrate the effectiveness of this idea 2 the experiments of incorporating different sorting methods are comprehensive weaknesses 1 in my opinion the pk is more like a set of weights rather than a probability distribution if it is the case i recommend improving the descriptions to reduce confusion 2 it would be nice to present an experiment with conditional probability distributions for k of different classes based on their semantic meaning like person animal i think it is also a significant contribution of this paper this paper proposes a flexible loss function for topk classification providing useful insights for image classification so it is worth of reading for the researchers in this area docsepthe paper proposes a method to employ the benefits of differential sorting methods towards topk classification learning the presents several experiments with sampling weights for different ranks and presents the results the loss is used for finetuning in most experiments apart from the cifar100 case the method seems to give minor improvements on the resnext101 3248d baseline strengths the efforts on optimizing the topk classification learning through differentiable sorting appear novel to me the discussion on differential sorting is comprehensive the paper specifically discusses each of the options and how it is optimized for the studied scenario experiments are thorough the work is also interesting because the performance gains come only because of finetuning weaknesses the second term and eqn 2 would be constant with k5 if only the top five rows of the p matrix are constructed expanding eqn2 for the 
given example in fig1 the loss would be log05 03 05 0306 assuming panda is the ground truth class consider a case of pk 05 0 0 0 05 the equation would be log 05 top1 05 top1top2top3top4top5 if only five columns are reconstructed and if they are column stochastic then the sum of top1 to top5 would always be 1 then the second term will always give a constant value requesting the authors to clarify this aspect at first it appears that the distribution would be a sample however fixed distribution is used for a set of experiments for example it is either 05 0 0 0 05 or 02 02 02 02 02 for the entire experiment hence presenting it as sampled is confusing the best results come when you have the top1 and the sum of the top five values hence the initial discussion and intuition can be improved a bit the improvements on noisy student efficientnetl2 are negligible 8835 to 8836 is certainly not statistically significant were experiments for table1 were also ran 10 times like table 2 please mention the number of rows that were reconstructed for each experiment the number of columns m is mentioned in the experiments but not the number of rows berrada et al was used to train the model from scratch it would be worth comparing their loss for finetuning purposes as well i think that would be a fairer comparison although the paper brings several novel perspectives there remain several ambiguities as well some additional experiments clarifications can also strengthen the draft overall in the current form the paper is a borderline one and the final decision will depend a lot on the discussion during the rebuttal phase docsepthe paper proposes a differentiable loss for topk classification based on differentiable sorting networks ie sorting neural networks in which basic minmax operations are replaced by smoothed versions ie softmaxsoftmin the main principle is to use the sorting network to estimate the probability of the rank of each class and then filter only the topk an extension consists in considering that k can take several possible values at random eg 50 chance of being 1 and 50 chance of being 5 the resulting loss is experimented on three datasets cifar100 imagenet1k and imagenet21kp and with three existing sorting networks performances are mainly compared to crossentropy showing low improvements strengths setvalued classification is an important topic to cope with class ambiguity few works only one as far as i know 1 have proposed a topk loss for neural networks and there is room for improvements the proposed approach is different from 1 as it relies on sorting networks to determine the set of the most likely classes rather than a purely topk objective weaknesses a first weakness is that the contribution is quite incremental and not well justified from a theoretical point of view using sorting networks for topk is an acceptable strategy from a practical point of view but a bit overkill and not very new from a theoretical point of view the proposal to use several values of k is also not really justified the principle of topk is to predict sets of fixed size contrary to other setvalued classification approaches that attempt to solve other objectives eg adaptive set sizes but equal to k on average we let the authors refer to 2 for a clear overview of the different objectives here the objective is not really clear if k is supposed to be a random variable eg 50 chance of being 1 and 50 chance of being 5 that means that for the same image x the classifier is supposed to return randomly either one class or 5 
classes without any consideration with regard to the image content itself another main weakness is that no significant improvement of the proposed loss over crossentropy is shown the reported topk accuracy gains are not systematic and so low that they may be not statistically significant as a first step towards a better understanding of the results the authors should first compute some significance tests eg pvalues on several runs and a clear crossvalidation procedure for model selection among epochs but even so it wont resolve the fact that the performance gain is observed only for some specific configurations eg a specific sorting network and specific values of k probabilities and remains very low even in such advantageous conditions 1 berrada l zisserman a kumar m p 2018 smooth loss functions for deep topk classification arxiv preprint arxiv180207595 2 chzhen e denis c hebiri m lorieul t 2021 setvalued classificationoverview via a unified framework arxiv preprint arxiv210212318 an interesting attempt to improve the topk classification but consistent limitations i an incremental contribution and no clear justification of considering k as a random variable ii no significant improvement of the proposed loss over crossentropy ### Summary:
The main consensus among the reviewers was that, although the approach is interesting, this submission suffers from two main weaknesses: (1) the methodology is not very novel, and parts of the proposed method are not well justified, in particular the interplay between the differentiable sorting approach and the random choice of k; (2) the results compared to a standard cross-entropy loss are not very convincing, as there does not seem to be a statistically significant advantage.
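For orientation, a minimal sketch of the kind of top-k cross-entropy with a distribution over k that the reviews above debate — not the paper's exact formulation. `soft_sort_permutation` is a hypothetical stand-in for any differentiable sorting operator that returns a relaxed permutation matrix, and placing the weights inside the log follows the fourth reviewer's 0.5/0.5 expansion.

```python
import torch

def topk_cross_entropy(scores, targets, k_weights, soft_sort_permutation):
    """Relaxed top-k cross-entropy with k drawn from a fixed weight distribution.

    scores: (B, C) logits; targets: (B,) true class indices.
    k_weights: dict {k: weight}, weights summing to 1, e.g. {1: 0.5, 5: 0.5}.
    soft_sort_permutation: callable returning a (B, C, C) relaxed permutation
    matrix P, where P[b, r, c] ~ probability that class c sits at rank r.
    """
    P = soft_sort_permutation(scores)
    batch = torch.arange(scores.shape[0])
    rank_mass = P[batch, :, targets]                  # (B, C): rank distribution of the true class
    mix = scores.new_zeros(scores.shape[0])
    for k, w in k_weights.items():
        mix = mix + w * rank_mass[:, :k].sum(dim=1)   # w_k * P(true class in top-k)
    return -torch.log(mix.clamp_min(1e-8)).mean()
```

With `k_weights = {1: 0.5, 5: 0.5}` this reduces to -log(0.5·P(top-1) + 0.5·P(top-5)), which is the case the fourth reviewer analyzes when questioning whether the top-5 term can become constant.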
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper considers a continuous version of the classical hopfield network hn modelin contrast to well studied discrete models where the patterns vectors that are stored are discrete this paper studied continuous vectors and a new continuous energy function convergence results to a fixed point are proven for the new rule and it is shown that for the case of random patterns the hopfield network can memorize exponentially many patterns with high probability  finally several implementations are given showing how incorporating the new hopfield net in classification tasks can improve classification accuracy in regimes where data is scarce and where neural networks do not fare well the paper is rather long and i did not verify all results the description appears soundthe proofs appear nontrivial and rather technical while the results here are nontrivial i was left me wondering about the added value of this new model one of the biggest advantages of hn was its simplicity and elegance more recent results of hopfield and others with higher degree energy functions managed to maintain this clarity and brevity the new model however is significantly more involved it was not clear to me what is gained by this greater complexity and whether the gains justify the larger complexity in actual implementations very limited precision is often necessaryhow does this discretization influence the continuous model how robust is it to rounding errors dont we get old discrete models in disguise the impressive empirical results raise similar questions cant we use old discrete hn instead of the new model and achieve similar results it would be perhaps more informative to compare different hn to the new model presented in this paper it seems a bit strange that previous uses of hn discrete did not achieve such an improvement in previous studies it would be beneficial to add more on related work in this area the authors might consider breaking their long paper to two different sections one presenting the theoretical advantages of their new model and the other focusing on practical benefits finally the nature of convergence to a fixed point wasnt clear to me it seems likely that if patterns are not random convergence can take a long time as is the case for discrete hn some recent work about the complexity of finding fixed points of continuous functions may be relevant herea converse to banachs fixed point theorem and its clscompleteness more specific comments 1 the paper starts with a rather lengthy discussion of previous work i would recommend outlining the contributions of this paper earlier on 2 converge in one update step with exponentially low error and have storage capacity proportional to it was not clear to me that random patterns are considered here 3 proven for c 137andc 315 in theorem 3 for what c exactly is the result proven 4 furthermore with a single update the fixed point recovered with high probabilityi presume this is true for random patterns 5 is beta0docsepthe paper introduces a new hopfield network which have continuous states and propose update rules for optimizing it it also draws connections between the new model and attention mechanism used in transformers small scale empirical study is presented overall i like the technical contribution of the work but feel the paper could be revised to improve clarity about the optimization in the new proposed variant of hopfield networks below some 
specific comments pros connecting hopfield networks to attention mechanism and drawing out the variants in section 3 as hopfield layers is useful the exposition in section 1 and 2 where the authors describe the hopfield network with continuous states is written well although i do feel the motivation behind update equations could be explained a bit better cons as i mentioned earlier i dont fully understand the intuition behind convergence in one update can the authors clarify this also the paper mentions update rule in eqn 5 converges after one update for well separated patterns what happens to the updates optimization when the patterns are not well separated this should be discussed after equation 5 maybe present different scenarios to make it clear empirical study is limited in my opinion and can be improved is the trend in fig 2 observed across more or less across all datasets can the authors comment on this i like the visualization in the figure but it is bit hard to interpret perhaps a more clearer label for it could help with that other comments the idea of separated patterns leads me to ask this question is there any connection of this work to maxmargin classifiers kernel methods did the authors consider what would happen if nonlinear transformations eg activation functions in dnns are applied on top of the inputs how does the existing network change in that case can the authors comment on the utility challenges in applying their proposed method on datasets tasks beyond the small scale uci datasets used in their experiments eg using them in large scale language modeling tasks where transformers are popular right now docsepthis work extends the binary hopfield network demircigil et al 2017 to continuous patterns and states connections are drawn between the result model to the attention layers of the transformers the pooling operation of lstm similarity search and fully connected layers experimental results are briefly described for analyzing the attention of bert models multiple instance learning and small uci classification tasks the proposed model seems very interesting and the proposed applications seem reasonable at a very high level however there is just not enough detail in this paper for me to understand how the models are implemented or why the model works better than other approaches for example section 3 declared 3 types of hopfield layers but without any formal definitions to them or how they are integrated to the proposed models the experiment section compares performances with existing models but lacks any analysis of why the proposed models work better similarly there is a lack of motivationintuition in the introduction section after author feedback thanks for the paper update and now i have a better understanding of the proposed approach i have updated my review to the following previously widrich 2020 showed that integrating transformerlike attention or equivalently modern hopfield networks based on softmax into deep learning architectures outperforms existing methods knn and logistic regression for massive mil such as immune repertoire classification more specifically a pooling layer can be formed by attending over a repertoire of instances with a fixed but learnable query vector this work provides theoretical analysis of such a layer for its energy function convergence of updates and storage capacity and points to directions of how such a layer can be understood and controlled it extends the previous experiment 1 apply hopfieldpooling attention with fixed learnable 
query q to more mil datasets animal image and breast cancer and achieve state of the art results 2 apply hopfield attention to 75 small uci benchmarks replacing feedforward nets here selu units klambauer 2017 are used to map input to storage y and query r the result is quite positive beating previous approaches including svm random forest and snn klambauer 2017 3 apply hopfieldlayer attention with fixed training data y as storage to 4 drug design tasks acting as an instancebased learning approach the result seems quite interesting indicating that general purpose layers such as feedforward pooling and nearest neighbors can be improved in terms of robustness learnability or controllability by adding attention like operations i think the paper can talk less about existing results and focus more on the new results and their analysis remove immune repertoire classification result since it is from previous work move the drug design experiment details to the main text and add some comment about under what condition hopfield outperformsunderperforms rf for the uci benchmark experiment the transformer layer vaswani 2017 seems to be a natural baseline and should be compared to suggestions for the presentation should only in the future work section state that hopfield can potentially substitute lstms or grus since it is all hypothetical with no experiment result at this point the word implemented in section 4 seems misleading as there is nothing changed in the bert model structure transformer and bert models can be implemented by the layer hopfield can be more specific in descriptions for example in the description of 2 layer hopfieldpooling and 3 layer hopfieldlayer in section 3 r and wk can be referenced again for state query patterns and the stored key patterns respectively it is probably more informative to replace figure 1 with a table to directly compare the energy function and updating rules of different hopfield netsie classical exponential and attention avoid using x in equation 1 since the symbol has already been used for the stored patterns hopfieldlayer seems to be a very strange name ### Summary:
The novelty of the paper lies in the following: it introduces a new Hopfield network with continuous states, which can hence be learned end-to-end via differentiation and backpropagation; it derives efficient update rules; it reveals a connection between the update rules and transformers; and it illustrates how the network can be used as a layer in a deep neural network that can perform different functions. The presentation was clear enough for the reviewers to understand and appreciate the novelty, although there were a few points of confusion. I would recommend the authors address several suggestions that came up in the discussions, including additional analysis to highlight when and how the network is able to outperform other competing models, and intuitions about the proofs for the theorems (it is okay to leave the detailed derivations in the appendix).
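As background for the update rule and its link to attention discussed above, a small numerical sketch of the standard continuous-Hopfield retrieval step (the attention-style form the reviews refer to); this is illustrative, not the paper's code.

```python
import numpy as np

def hopfield_update(xi, X, beta=1.0, n_steps=1):
    """Continuous modern-Hopfield retrieval.

    xi: (d,) query/state pattern; X: (d, N) stored patterns; beta: inverse temperature.
    The update xi <- X softmax(beta * X^T xi) is the attention-like rule; for
    well-separated stored patterns a single step typically suffices.
    """
    for _ in range(n_steps):
        logits = beta * (X.T @ xi)
        p = np.exp(logits - logits.max())   # softmax over stored patterns, numerically stable
        p /= p.sum()
        xi = X @ p
    return xi
```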
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work tackles the problem of learning linear sorting functions with bounded noise under gaussian martingales the proposed algorithms enjoy strong theoretical sampling guarantees and a polynomial runtime for both the normalized kendalls tau distance and the topr disagreement loss strengths presentation the problem is well introduced and the main results are clearly presented impact the results established seem to be of general interest in addition to solve the label ranking problem the paper is technically sound weaknesses no experimental results limit the impact of the work clarity although the first two sections are very clear the second half of the paper feels harder to follow it does not feel clear to me whether the stated algorithms are solutions to the problem with kt distance or with topr disagreement or both the theoretical limitations are adequately addressed the authors state that the potential negative societal impacts of their work is na due to its theoretical nature it might still be valuable to mention what could go wrong if the suggested algorithms were actually deployed docsepthis paper is the first to study the problem of learning linear label rankings in the presence of noise in the label ranking problem we are given access to samples of the form xy where x in mathbbrd and y is a permutation of the sequence 1 2 3 ldots k for example this can correspond to a ranking of movies by preferences of a particular user in a movie recommendation system in the linear label ranking problem there is an additional constraint that the ranking should be such that it can be formed by the indices corresponding to a descending sort of the entries of wx for some matrix w in mathbbrk times d further in the noisy linear label ranking problem we are given access not to pure samples from a linear label ranking distribution but instead samples whose labels are corrupted by some noise this paper also assumes that the marginal distribution of x needs to be gaussian they provide two algorithms one improper and one proper for learning with error bounds in the normalized kendall tau kt distance they also provide an algorithm with error bounds in the topr disagreement metric in particular their improper learning algorithm in the kt distance uses algorithms for learning linearthresholdfunctions ltfs in the massart noise model as subroutines originality i am not an expert in this area so i am not entirely sure about other related work the proposed algorithms and getting them to work as in proving guarantees for them are quite nontrivial and so the paper is quite original in my opinion quality the submission is technically sound all claims are wellsupported with proofs clarity the submission is clearly written and wellorganized significance the paper is the first to study a very natural problem and so i think it is quite significant ranking functions have many applications and developing robust algorithms for learning ranking functions can have good practical impact on the theoretical front these problems are also clearly of interest to the neurips community as mentioned on page 2 of the paper the case of k 2 captures the problem of learning halfspaces with massart noise the best paper award winner of neurips 2019 was on this topic this is primarily a theoretical paper and so the authors have mentioned that it doesnt have any negative social impact docsepthis paper considers the learning of 
linear sorting functions under gaussian marginals in presence of bounded noise in the special case k2 the problem reduces to the wellstudied learning of halfspaces with massart noise the author generalized the problem setting and provided efficient algorithms with respect to kendalls tau distance and topr disagreement loss the work makes a significant contribution by proposing the first efficient algorithm for learning of lsfs with bounded noise the basic algorithmic ingredient is an efficient learner zsa20 for the class of halfspaces for the special case of k2 however the algorithm is generalized to any k improperly and is further used to obtain a proper learner using the ellipsoid method when the error is measured by topr disagreement loss the proper learner also achieves improved sample complexity comparing to a naive invocation of the improper learner the paper is very wellwritten with technical highlights appropriately placed and the analysis is sound the work does not have negative social impacts docsepthe setup is the following there is an unknown k x d matrix w and a player observes a feature vector x in rd and a ranking sigmax ie a permutation over k generated as follows the feature vector x is sampled from a ddimensional standard gaussian then the permutation sigmax over k is generated by sorts the indices of wx in decreasing order the goal is the learn a matrix w which approximates the label ranking in particular we want that with high probability over a fresh x drawn from a ddimensional standard gaussian sorting wx gives a permutation which is very close to that of wx the paper studies two notions of closeness kendaltau distance and topr distance the kendaltau distance kt between two permutations is the fraction of pairs ij in k where their relative order agrees in the learning setup this corresponds to saying that with high probability over x with high probability over a random ij from kxk the relative order of wxi and wxj agrees with wxi and wxj while the kendaltau distance is wellstudied it is perhaps less motivated in ranking setups where one is more interested in higher ranked elements in settings where higher ranked elements are more important the paper studies the topr distance this is a 01 distance based on whether the top r ranked elements are exactly the same in exactly the same order while exact versions of the above are relatively simple an algorithm using linear programming can find w there is some noise in what the player observes in particular the player observes a draw from a distribution which is promised that each pair disagrees with the groundtruth ordering with probability at most eta where eta 12 results 1 a polynomialtime algorithm for learning w in kt distance from od logk eps 1 2eta6 up to distance eps 2 a polynomialtime algorithm for learning w in topr distance from od k r eps 12eta6 up to distance eps important remarks the noise model is arbitrary as long as it has marginals on pairs which are different with probability at most eta this along with the fact that sorting functions are linear makes the problem a similar of learning halfspaces with massart noise because of this connection the assumption that x is gaussian is somewhat necessary because there are superpolynomial lower bounds in the statistical query model the algorithm proceeds in three steps first a reduction from a ranking to ok2 binary comparisons second an improper learner which aggregates the ok2 binary comparisons third an algorithm which uses the intermediate steps of the improper 
learning to output a hypothesis w while the first and second steps are known and have appeared in the literature before the novel aspect of this work is finding the matrix w to do this the paper proves two interesting geometric lemmas relating the angles between proposed rows of w and w with the corresponding kt and topr distance strengths the paper studies a natural problem in learning rankings the problems seem like natural extensions of learning halfspaces with massart noise and a good model for learning rankings with noise from a technical perspective the approach is natural and the geometric lemmas interesting the paper is also wellwritten weaknesses i dont really see any strong weaknesses in the paper the work is purely theoretical at this point and seems to have no potential negative societal impact ### Summary:
The reviewers are unanimous in their strong positive opinion of this paper. The authors have given the first efficient algorithms for learning noisy linear sorting functions with theoretical guarantees, a relevant and useful problem setup for the NeurIPS community. The reviewers consider the paper clear and well presented, and thus this is a natural accept.
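To make the two error measures in the reviews concrete, a small sketch of a linear sorting function and the normalized Kendall tau distance between two rankings; the representations (rankings listed best to worst) are illustrative choices, not the paper's notation.

```python
import numpy as np
from itertools import combinations

def linear_sorting_function(W, x):
    """sigma_W(x): labels ordered by decreasing <w_i, x>; W is (k, d), x is (d,)."""
    return np.argsort(-(W @ x))

def kendall_tau_distance(sigma, pi):
    """Normalized Kendall tau: fraction of label pairs ranked in opposite order.

    sigma, pi: permutations of 0..k-1 listing labels from best to worst.
    """
    k = len(sigma)
    pos_s = np.empty(k, dtype=int); pos_s[sigma] = np.arange(k)   # rank position of each label
    pos_p = np.empty(k, dtype=int); pos_p[pi] = np.arange(k)
    disagree = sum((pos_s[i] - pos_s[j]) * (pos_p[i] - pos_p[j]) < 0
                   for i, j in combinations(range(k), 2))
    return disagree / (k * (k - 1) / 2)
```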
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary of paper this paper presents an approach for quantising neural networks such that the resulting quantised model is robust to adversarial and random perturbations the core idea of the paper is to enforce the lipschitz constant of each linear layer of the network approximately close to 1 since the lipschitz constant of the neural network is bounded by the product of the lipschitz constant of its linear layer assuming lipschitz 1 activation functions the lipschitz constant of the trained neural network is bounded by 1 this results in a model which is robust to adversarial and random noise ad all directions in the model space are nonexpansive algorithmically controlling the lipschitz constant is achieved by using the orthogonal regulariser presented in the paper cisse etal which has the same motivation for this work but for standard neural network training but not quantising the authors presents thorough experimental study showing why standard quantisation schemes are prone to adversarial noise and demonstrate clearly how this approach improves robustness of quantised network and sometimes even improve over the accuracy of original model review the paper is well written with clear motivation and very easy to follow the core idea of using orthogonal regulariser for improving the robustness of neural network models have been presented in cisse etal and the authors reuse it for improving the robustness of quantised models the main contribution of this work is in identifying that the standard quantised models are very vulnerable to adversarial noise which is illustrated through experiments and then empirically showing that the regulariser presented in cisse et al improves the robustness of quantised models with rigorous experiments the paper add value to the research community through thorough experimental study as well as in industry since quantised models are widely used and the presented model is simple and easy to use some suggestions and ideas 1 it will be great if the authors could add a simple analytical explanation why the quantised networks are not robust 2 the manifold of orthogonal matrices does not include all 1 lipschitz matrices and also the orthogonal set is not convex i think a better strategy for this problem is to regularise the spectral norm to be 1 regularising the spectral norm is computationally cheaper than orthogonal regulariser when combined with sgd using power iterations moreover the regulariser part of the model becomes nice and convex 3 another strategy to control the lipschitz constant of the network is to directly penalise the norm of the jacobian as explained in improved training of wasserstein gans gulrajani et al docsepsummary the paper proposes a regualrization scheme to protect quantized neural networks from adversarial attacks the authors observe that quantized models become less robust to adversarial attacks if the quantization includes the inner layers of the network they propose a lipschitz constant filtering of the inner layers inputoutput to fix the issue strengths the key empirical observation that fully quantized models are more exposed to adversarial attacks is remarkable in itself and the explanation given by the authors is reasonable the paper shows how a simple regularization scheme may become highly effective when it is supported by a good understanding of the underlying process weaknesses except for observing the empirical 
weakness of fully quantized models the technical contribution of the paper seems to be limited to combining the lipschitzbased regularization and quantization has the lipschitz technique already been proposed and analysed elsewhere if not the quality of the paper would be improved by investigating a bit more the effects of the regularization from an empirical and theoretical perspective if yes are there substantial differences between applying the scheme to quantized models and using it on fullprecision networks it looks like the description of the lipschitz method in section 4 is restricted to linear layers and it is not clear if training is feasibleefficient in the general case questions has the lipschitz technique been proposed and analysed elsewhere is the robustness of fullprecision models under adversarial attacks also improved by lipschitz regularization how popular is the practice of quantizing inner layers has the performance of fully quantized models ever been compared to fullprecision or partially quantized models in an extensive way beyond adversarial attack robustness are the adversarial attacks computed using the fullprecision or the quantized models would this make any difference the description of the lipschitz regularization given in section 4 assumes the layers to be linear does the same approach apply to nonlinear layers would the training be feasible in this case docsepimho this manuscript is clearly written addresses a confusing point in the current literature clarifies some issues and provides a novel and useful approach to mitigate those issues reading the other comments online the authors seem to have addressed those concerns as well ### Summary:
The reviewers agree that the paper brings a novel perspective by controlling the conditioning of the model when performing quantization, and that the experiments are convincing. We encourage the authors to incorporate the additional references suggested in the reviews. We recommend acceptance.
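A minimal sketch of the kind of orthogonality (Parseval-style) penalty discussed in the reviews, which keeps each linear layer's Lipschitz constant close to 1 while training the model to be quantized; the regularization coefficient and where the penalty is applied are assumptions, not the paper's exact recipe.

```python
import torch

def orthogonality_penalty(W):
    """||W W^T - I||_F^2: pushes the rows of a linear layer toward orthonormality,
    keeping the layer's Lipschitz constant (largest singular value) near 1."""
    eye = torch.eye(W.shape[0], device=W.device, dtype=W.dtype)
    return ((W @ W.t() - eye) ** 2).sum()

# Example usage (hypothetical training loop):
#   reg = sum(orthogonality_penalty(m.weight) for m in model.modules()
#             if isinstance(m, torch.nn.Linear))
#   loss = task_loss + lam * reg        # lam is a small coefficient, e.g. 1e-4
```

As one reviewer suggests, a spectral-norm penalty (power iteration on each weight matrix) is a cheaper alternative that targets the Lipschitz constant directly.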
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes an algorithm for computing an approximation of the posterior and marginal likelihood by analysing the sequence of programs using neural networks as well as a metaalgorithm for learning the network parameters over a training set of probabilistic programs experiments demonstrate the feasibility of the metaalgorithm for learning inference algorithms that generalise well to new but similar programs these learnt algorithms were sometimes found to outperform alternatives in terms of timeefficiency there were quite a few terms that i was not familiar with for example what is the state of an algorithm i did not find a formal definition of this term in the paper in the definition of the infer function what does the keyword in mean it did not seem clear how well the whitebox inference algorithm performs when the number of commands in the program is very large i am not at all familiar with probabilistic programming this does look like a serious piece of work though i do not know how novel it is or whether the claims in the paper are correct docsepthe paper presents an algorithm for composing inference algorithms out of simpler neural net building blocks one per unique statement type in the probabilistic programming language presented the language is simple without recursion or loops reducing issues due to feedback problems from the approximation the networks are trained using hmc or importance sampling samples from many programs in a similar space there is an empirical study of several classes of small gaussian probabilistic programs the paper claims that the learned inference algorithm works well for tasks which are similar to the training problems but the notion of similarity is not fully defined nor is there an example of how the system fails when applied to a dissimilar program are there diagnostics for checking the output when the model is applied to a program which is too dissimilar the experimental study tests performance within a family of programs each using small neural networks to infer each program statement does training on all families of programs allow the system to make accurate inferences on any of those families does it allow it to generalise across families without some notion of how the system generalises im not sure when i would choose to use this rather than running hmc on my program given a single hmc run will be faster than training the neural networks on multiple hmc runs for different programs close to the program of interest the experiment in section 53 shows that the system is around 2x faster than importance sampling from the prior but this doesnt take into account the time necessary to train the neural nets nor the time taken for the importance sampling runs used to generate the training data how are losses propagated through the program if each neural net is 3 layers then programs with 10 statements have at most 30 layers which is usually past the point where some amount of regularization or normalization is necessary to stabilize training or prevent vanishing gradients could the authors comment on the stability of training whats the failure mode when the test loss diverges is it detectable without having hmc or other high quality samples how robust is this approach to differing choices of neural net architecture the paper uses a 10 dimensional state space when parsing the program but its not clear how this value should be modified as the 
number of latent variables or the program complexity changes overall the paper is well written and explained and the experimental study is detailed for the areas it covers the paper presents a learned inference algorithm but its not clear how it generalises either across program types which is necessary to amortize the training cost wrt hmc or across neural network architectures eg changing the internal state space in response to increased program complexity additionally its noted that occasionally the test error diverges but theres no discussion of how to detect this in practice if the system was used for inference docsepthis introduces a metalearning algorithm for learning inference algorithms applicable to any probabilistic program this is accomplished by associating a neural network with every grammar rule of a probabilistic programming language and outputting posterior draws this generalisation is possible because each neural network component is feed marginal likelihood information for each ppl instruction this work is very interesting and novel its a unique attempt to learn a general inference algorithm i think meta learning is something that is a good fit for many bayesian approaches and i want to see more work like this im curious about the expressivity of the language the grammar suggestions a modelling language that consists of sequence of commands i dont see how this language would be able to express recursive programs it seems one would need something like a label and jump commands to accomplish that this is admitted in the appendix but not really acknowledged in the main text i think the main paper should reflect the present limitations and not over promise the existing contribution i have some concerns about the experiments some examples are fairly simple and the results for the more substantial ones are not shown like hierd and rb the test losses look fairly bad for the experiments that are shown so im not fully sure generalisation has been demonstrated if the issue is a few bad generated programs maybe show median loss the results in figure 4 seem to be for during training but feasibility presumably requires similar results on unseen programs at test time i also worry about correctness as the neural networks are used as is and not as a proposal how much can we trust the posteriors that come out of this method is there anything that can be said about the learnt posterior it seems right now that the learned distribution can recover the mean of the true parameters and maybe the variance some of the writing is slightly sloppy for example the phrase socalled static single assignment assumption is used i dont know what that means but i do know there is a static single assignment intermediate representation that exists within many compilers i think thats what the paper meant to refer to related work should cover how this approach differs from stites et al httpsarxivorgabs210300668 i recommend this paper for rejection while i think the approach has the potential to work right now its very hard to get a sense of what was learned by the meta learning algorithm how well any of this generalises when model structure or even observed data changes significantly or even if the language is too restricted to make this is a significant enough contribution docsepthe paper proposes a new restrictive class of probabilistic programs with fixed number of random variables and without loops the authors then propose an inference technique that learns the parameters of a neural network for sampling 
from the programs posterior distribution by composing it from individual neural networks for each atomic command in the language this technique is shown to perform well during inference once the neural network has been trained on training programs at the very least the inference speed is shown to be very high the main drawback in this work is a lack of novelty the use of neural networks to train a proposer for a model is not new while the authors attempt to cast their work as something different than ic this doesnt quite come out the claim that a neural network for one program can be used for a different program even though the neural network takes a one hot encoded representation of the variables in the model is hard to see a clear technical statement of what kind of cross model generalization is possible is needed the paper shows results across model structures where the dependency graph and the position of a function changes but the number of variables is the same it is not clear why ic cant deal with this minor variation in the same probabilistic program these models are so simple how hard would it be to train ic on these models and then do inference i would like to have seen ic results in this paper to believe that this work is different the language chosen is not a universal ppl i cant follow how this can be used as an intermediate language by a compiler for a universal ppl as claimed on page 3 please show an example of how a program with unbounded random variables can be compiled into this language the inference in this paper looks like mean field variational inference which makes me wonder whether hmc is really such a good comparison please show some comparisons to vi in stan this would be trivial to run since you are already running hmc the models are very simplistic with no discrete variables and no multimodal posteriors it is not a meaningful claim to make footnote on page 9 that the inference algorithm provides a good coverage of the posterior by covering all the modes the posteriors shown for multimodal models should at least look multimodal variational distance or symmetric kl divergence results would be needed to make claims about correctness of the posterior regarding ess per second results these can be misleading the algorithm might have a cap of ess for example it would be better to run your algorithm for the same duration as hmc and show higher ess numbers the claim that the paper provides generalization of compiled inference across models is not supported by the description or the simple examples these appear to be covered by existing work on inference compilation the focus on a very restricted class of ppls makes this work very limited i dont believe i learned anything from this paper ### Summary:
The paper presents a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. While the reviews agree that this is a very interesting research direction, they also reveal that several questions are still open. One reviewer points out that learning to infer should take both the time for learning/inference and the generalization to other programs into account, i.e., what happens if a program is too different from the training set — does the benefit then vanish? Moreover, as pointed out by another review, recursion as well as while loops are not yet supported. Also, the relation to inference compilation (IC) needs some further clarification. These issues show that the paper is not yet ready for publication at ICLR; however, we would like to encourage the authors to improve the work and submit it to one of the next AI venues.
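To illustrate the restricted language the reviews describe (a fixed sequence of sample/observe commands, no loops or recursion, fixed number of latent variables), a toy straight-line program; the variable names, distributions, and observed value are purely illustrative and not taken from the paper.

```python
import numpy as np

def toy_straight_line_program(rng):
    """z1 ~ N(0,1); z2 ~ N(z1,1); observe y = 2.3 under N(z2, 0.5).
    Returns the sampled latents and the log-likelihood of the observation."""
    z1 = rng.normal(0.0, 1.0)
    z2 = rng.normal(z1, 1.0)
    y_obs, sigma = 2.3, 0.5
    log_lik = -0.5 * ((y_obs - z2) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return (z1, z2), log_lik
```

In the approach under review, each such command type would be handled by its own small neural network, and inference for a whole program is composed from these per-command networks.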
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents an important and interesting approach on fully decentralized marl fully decentralized qlearning is highly applicable to realistic and realworld applications the method is evaluated extensively showing a great potential strengths theoretical analysis extensive evaluations interesting perspective to the marl problem with potential realworld applicability weaknesses missing out on some related work on fully decentralized marl lack of sota baselines yes docsepthis paper proposes a new marl algorithm under the dtde paradigm specifically the proposed algorithm i2q is introduced based on the ideal transition probability where each agent assumes that the others adopt the optimal actions for each decision and a previous idea named qsslearning theoretical guarantee on the convergence of the proposed algorithm is provided under certain conditions for experimental studies the significant superiority of i2q is demonstrated in matrix games mpe ma mujoco and smac pros this paper is clearly written the experimental part is relatively diverse and adequate nbsp cons the novelty of the proposed algorithm is limited nbsp minor issues and typos l299 samc smac the rationality and limitations of the main assumptions adopted in this paper are discussed in sec34 docsepthis paper presents i2q an algorithmic approach for decentralized marl the authors present the nonstationarity problem in this setting and propose to use ideal transition probabilities to solve it particularly these are transition probabilities for which all agents are ensured to converge to an optimal solution when trained in a decentralized manner the authors then propose to use the next state in deterministic environments as a representation of an action and show that it induces an ideal transition probability which ensures convergence to an optimal solution they experiment on many baselines in various domains showing the benefit of their approach the paper proposes an elegant solution to the nonstationarity problem of decentralized marl im not able to say if it is the first method to solve this problem and i hope one of the other reviewers will address this the paper is clearly written and the presentation is great also i found everything to be easy to read and follow finally the experiments section seems to have chosen a wide variety of tasks and im glad the authors also chose to show results on the high dimensional problem of scii the paper doesnt have strong flaws but there are some issues that make it a borderline paper for neurips first the theory is not very deep there are many questions that remain open that the authors dont address theoretically and i think are important for a better understanding of the problem one of these is convergence proof of i2q which the authors dont really prove but only discuss informally second i feel that the deterministic assumption in the paper is a strong one unless carefully addressed in favor of the authors they do discuss this in the paper showing a result of the value gap and also experiments on a wide variety of tasks still i believe this is not adequately addressed a stronger result for stochastic environments should be provided i assume there exist some ideal transition probabilities for this setting if it is the case that such are impossible to theoretically find then this is an important point to address in the paper overall i find theorem 3 to be a trivial result i 
wish to see an approach that tackles stochasticity explicitly and provides a tighter bound for approximation errors third the fact that i2q must learn a forward model is troubling as modelbased methods usually fail againt state of the art modelfree methods on high dimensional tasks unless latent spaces are used such as in muzero the authors dont address the problem of estimating f in their work moreover i feel that this is not addressed fully in the experiments either finally while the experiments show results on different types of environments i find that i2q was not compared against enough baselines there are a lot of new baselines on marl and particularly i would expect the authors to compare i2q to at least three more baselines which are considered sota and not only iql even if they are not decentralized strengths 1 a new solution for decentralized marl 2 proofs to formal statements seem correct 3 paper is clearly written and presentation is great 4 experiments show a variety of interesting tasks weaknesses 1 theory is weak 2 stochastic environments should be addressed 3 forward model should be addressed theoretically and in experiments 4 experiments are lacking comparison to other algorithms the authors discuss limitations of their work some of these limitations coincide with points ive already raised as mentioned above i believe some of these points should be addressed more thoroughly in the paper ### Summary:
The paper presents a novel method for dealing with non-stationarity in decentralized multi-agent reinforcement learning (MARL). While there are some concerns about the level of novelty, the approach is interesting and presented well. There are also concerns about the discussion and comparison with the state of the art in decentralized MARL methods. We suggest the authors include comparisons to other decentralized MARL methods such as the ones below, or state why such comparisons are not reasonable: Omidshafiei, Shayegan, et al., "Deep decentralized multi-task multi-agent reinforcement learning under partial observability," International Conference on Machine Learning, PMLR, 2017; Palmer, Gregory, et al., "Lenient multi-agent deep reinforcement learning," Proceedings of the International Conference on Autonomous Agents and Multiagent Systems, 2018; Lyu, Xueguang, and Christopher Amato, "Likelihood quantile networks for coordinating multi-agent reinforcement learning," Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems, 2020.
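As a rough illustration of the QSS-style value the reviews mention (learning Q over state-to-state transitions in a deterministic environment rather than over joint actions), a toy tabular update; `neighbors` is a hypothetical enumerator of states reachable in one step, and this sketch is not the paper's I2Q algorithm.

```python
def qss_update(Q, s, s_next, r, neighbors, gamma=0.99, alpha=0.1):
    """One tabular update of Q(s, s') with a Bellman target over reachable next states.

    Q: dict mapping (state, next_state) -> value; neighbors(x): iterable of states
    reachable from x in one step (assumes a deterministic environment).
    """
    target = r + gamma * max(Q.get((s_next, s2), 0.0) for s2 in neighbors(s_next))
    Q[(s, s_next)] = Q.get((s, s_next), 0.0) + alpha * (target - Q.get((s, s_next), 0.0))
    return Q
```

Using the next state as a stand-in for the joint action is what lets each agent reason about an "ideal" transition without observing the other agents' actions.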
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors propose a varianceinvariancecovariance regularization technique for selfsupervised learning the loss function used in the paper consists of three terms the invariance term encouraging samples with different view to have similar embedding the variance term which is a hinge loss on the variance of the embedded variables this is the main contribution of the paper and the authors claim that it helps to avoid variance collapse and a covariance term which borrows from the previous work barlow twin the proposed method has greater flexibility for siamese architecture design such as not requiring batchnormalization and weightsharing which the authors claim opens the door for multimodal signal embedding experiments and ablation study have been conducted to demonstrate the performance of the proposed components strengths the authors did a very good job in explaining the background and presenting the paper the main idea is conveyed very clearly the idea of adding a variance term to the total loss to avoid representation collapse is interesting intuitive and novel a great number of experiments compared with prior methods with detailed set up have been conducted ablation analysis has also been conducted showcasing the effects of different components a study on multimodal signal representation learning is presented demonstrating the importance of not requiring architecture or weight sharing in two branches weakness it seems that the main contribution which is the variance term plays a somewhat insignificant role in table 1 and table 2 in fact compared to barlow twins which does not have the variance term the proposed method in many cases actually underperforms not requiring shared weight between different branches is a feature of barlow twin as well can the authors provide an explanation on the inferior performance of barlow twin in table 3 and table 5 the authors mentioned that using standard deviation instead of the variance in the hinge loss is important can a toy numerical example be provided to showcase the presence of representation collapse when variance is used the paper is easy to understand and has its contribution and novelty many experiments have been conducted but theory is a bit lacking i am willing to increase my rating if the authors can respond to my comments docsepthis paper combines three objective functions for the selfsupervised visual pretraining on imagenet 1 the alignment between the two different views of an identical image which is very common for existing methods 2 the covariance term to bring the offdiagonal coefficients of the features covariance matrix to zero which is modified from the barlow twins 3 the variance term that defines a hinge function on the standard deviation of embeddings along the batch dimension for every specific dimension of the feature projections to the best of the reviewers knowledge such objective function is firstly applied for the visual pretraining in this paper although the same measure has been used to analyze the model collapse problem eg in the paper of simsiam but not be designed as a specific pretrained loss function strengths 1 the paper is wellwritten and easy to follow 2 the method is simple and achieve comparable performance for both linear evaluation and downstream transferring 3 the authors provide a clear and detailed discussion to compare this work with the previous methods weaknesses 1 the reviewer does not 
feel very excited about the work in fact the three loss functions are not very novel as the reviewer mentioned in the summary the covariance term is just directly modified from the barlow twins the same measure of the variance term has been used in some previous works eg simsiam to analyze the model collapse problem while it is not designed as a pretrained loss function 2 in the table 1 the comparison with previous methods might not be very fair in particular some compared methods such as moco v1v2 simsiam and infomin are just pretrained for 800 epochs while the proposed model is pretrained for 1000 epochs besides some of the previous methods do not use lars optimizer and warmup strategy that are applied in this work 3 while the proposed method is simple however the computation time of the covariance matrix is quadratic in terms of the feature dimension which slow the pretraining significantly 4 although the authors have provided detailed discussions to illustrate the differences of this work with previous works in terms of the design details however can the authors elaborate theoretically on the advantages of the variance and covariance terms against the whitening operation in wmse 5 besides resnet50 it will be more beneficial to the community if the authors can compare the proposed method with the moco v3 by showing the performance with the transformer backbone overall the reviewer tend to vote for accept for this work since the proposed method is simple and it has conducted thoughtful experiments to demonstrate the effectiveness the reviewer encourages the authors to speed up the proposed method make the comparison with previous methods fairer and try to test the method on different architecture docsepthe paper propose a new selfsupervised method new loss is designed to explicitly avoid collapsed solution advantages 1 authors give an explicit loss function to deal with the collapsed solution problem which is understandable and explainable compared with byol and simsiam and the design of minimizing standard deviation for each dimension is insightful 2 the application of minimizing variance and covariance to other methods especially simsiam is interesting which can help people understand the mechanism of how negativefree methods work 3 wellwritten and easy to follow comments 1 the invariance term and covariance term seems a decouple version of barlowtwins so i thought the main difference is the variance term however from the results it seems that vicreg does not bring extra improvements compared with barlowtwins it is not clear that what kind of problem authors aim to solve if the variance term is the key it will be better to show the std of barlowtwins features and give more analysis of why the combination of varianceinvariancecovariance is advantageous 2 authors emphasize that one of the advantages of vicreg is it does not require the weight sharing it is indeed the vicreg can work without siamese network design but the property maybe not a exclusive advantage of vicreg according to my understanding simclr barlow twins can also work with two different architectures i thought authors should also compare with these method in the setting of nonshared architectures 3 about the esc50 experiments it is not clearly that why vicreg perform much better than barlowtwins in this experiment and i can not find details in the paper that whether barlowtwins also use the multimodal data because i believe that barlow twins can also work with different architectures so it is important to figure out why 
vicreg perform better 4 table 4 shows the effect of variance term and covariance term on different method but missing barlowtwins i believe the effect of variance term on barlowtwins is a key experiment to compare the varianceinvariancecovariance framework is insightful but the experiments are not so convincing docsepthe paper proposes a novel objective function for selfsupervised representation learning the objective function consists of three terms the invariance the variance and the covariance terms the invariance term drives representations to be invariant to input transform the variance term ensures each dimension of the representation has enough variability and the covariance term inhibits coadaptation of dimensions the proposed objective function shows competitive performance to existing selfsupervised learning techniques strengths the overall exposition of the paper is clear and easy to follow the proposed method is simpler than the previously proposed selfsupervised learning techniques it is agreeable that the variance and covariance terms prevent the collapse of representations the ability to handle the heterogeneous encoding networks seems to be a meaningful improvement the proposed method requires a moderately sized batch of 2048 weaknesses it is unclear that the collapse of representations the main problem tackled by the paper is the major bottleneck in selfsupervised learning the experimental results presented in table 1 and table 2 are okay but not pushing the boundary of selfsupervised learning while table 3 and table 5 showed that vicreg is more suitable for using heterogeneous encoders the necessity of heterogeneous encoders is not demonstrated very clearly because the setting is not practical the performances reported in table 3 are far from the stateoftheart and in table 5 the shared weight setting performs best a more natural setting such as representation learning for multimodal data as in vse 1 should be investigated the contributions of the variance term and the covariance term are not well analyzed table 4 is supposed to show the contributions but it lacks covreg column so that the conclusion from the table is somewhat vague additional efforts for illustrating the effect of the variance and the covariance terms will make the paper more persuasive the difference from barlow twins needs to be elaborated in detail otherwise the proposed method is conceived as a minor improvement over barlow twins i found that the definition of the covariance term is meaningfully different from that of barlow twins but it is not emphasized 1 faghri fartash et al vse improving visualsemantic embeddings with hard negatives arxiv preprint arxiv170705612 2017 i vote to reject because the contributions of the paper are not well demonstrated in the paper ### Summary:
This paper presents a self-supervised learning method for the multi-modal setting where each modality has its own feature-extraction mapping and (i) the extracted features shall be close for paired data, (ii) in the feature space each view has close-to-diagonal covariance, while (iii) the scale of each feature dimension is constrained away from zero to avoid trivial features. The presentation is clear, and the reviewers do not have major confusion about the methodology. There have been some discussions between the authors and reviewers, and most questions on the empirical study have been addressed by the authors with additional experiments. The remaining concern is the novelty (the difference from prior SSL methods, especially Barlow Twins) and significance. I think that, while it is relatively straightforward to extend methods like Barlow Twins to the multi-modal setting, I do see the value of empirically demonstrating the effectiveness of an alternative loss to the currently pervasive contrastive learning paradigm, and hence the paper is worth discussion in my opinion. In the end, the method resembles classical multi-modal methods like canonical correlation analysis in terms of the objective (matching paired data in latent space) and constraints (uncorrelated features in each view and a unit-scale constraint for each feature dimension); such connections shall be discussed.
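For reference, a minimal sketch of the three loss terms the reviews discuss — invariance, variance, and covariance — computed on two batches of embeddings; the weighting coefficients (omitted here) and the target of 1 for the per-dimension standard deviation are the commonly reported choices, not necessarily the exact values used in the paper.

```python
import torch
import torch.nn.functional as F

def vicreg_terms(z_a, z_b, eps=1e-4):
    """z_a, z_b: (N, D) embeddings of two views of the same batch."""
    invariance = F.mse_loss(z_a, z_b)                       # pull paired embeddings together

    def variance(z):
        std = torch.sqrt(z.var(dim=0) + eps)                # per-dimension std over the batch
        return torch.relu(1.0 - std).mean()                 # hinge keeps each std above 1

    def covariance(z):
        z = z - z.mean(dim=0)
        c = (z.T @ z) / (z.shape[0] - 1)                    # (D, D) covariance matrix
        off_diag = c - torch.diag(torch.diag(c))
        return (off_diag ** 2).sum() / z.shape[1]           # decorrelate feature dimensions

    var = variance(z_a) + variance(z_b)
    cov = covariance(z_a) + covariance(z_b)
    return invariance, var, cov                              # combined with coefficients in the full loss
```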
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes a method to scalably compute wasserstein2 barycenters given samples from input measures in general the authors also allow for continuous measure settings inspired by li et al 2020 the paper uses a potentialbased approach and recovers the barycenter by using gradients of the potentials as pushforward maps in general i feel this paper is wellwritten and provides a fast solution to a meaningful problem thereby supporting the claim of novelty the theoretical developments in the paper are reasonable and the experiments carried out are quite decent both in simulation and realdata settings the only point that bothers me is the approximation used it would be great if the authors could give an extensive and detailed understanding of settings where the upper bound in eq10 in the main text is small thereby leading to a good approximation docsepthis work introduces a new wasserstein2 barycenter computation method the authors first derive the dual formulation of the wasserstein2 barycenter problem and then parametrize the convex potentials by icnns the congruent and conjugacy conditions are enforced by regularization terms respectively they then show that the algorithm can find a good barycenter if the objective function is properly minimized pros 1 the algorithm does not introduce bias 2 the algorithm does not require minimax which is efficient 3 the empirical performance is much better than existing methods probably due to the above two reasons areas to improve 1 it is good that the empirical analysis include how the performance change wrt d it would be better if there is a similar analysis to n furthermore since 2n icnns are needed to be trained it would be better if the training time is also reported so that we can have a more comprehensive understanding of the method will there be a setting that discrete method can be faster than the proposed method to enforce comparable approximation error say large n for 3d applications 2 since the congruent and conjugacy conditions are enforced by regularizations they are not guaranteed to be satisfied therefore it would be better if there is an experiment showing that how the conditions are satisfied 3 the first section of related work should also briefly include httpsarxivorgabs160508527 and httpsarxivorgabs190500158 after rebuttal the additional experiment results provided in the rebuttal stage suggests the efficiency of the proposed method as well as the congruent and conjugacy conditions are approximately satisfied i therefore believe this paper should be accepteddocsepsummary the paper considers the wasserstein barycenter problems in the continuous setting in particular the authors propose an algorithm to compute the wasserstein2 barycenter when only samples from the marginals are accessible some theoretical analysis of this method is presented several numerical examples are carried out to compare this method with two other recently proposed methods reasons for score the proposed algorithm utilizes an interesting regularization of the dual formulation of wasserstein2 barycenter resulting in a single minimization problem instead of a minmax problem this algorithm is properly justified by theoretical results as well as numerical experiments pros 1 the paper provides theoretical results on the consistency of the proposed algorithm 2 the experiments are overall good and clear 2 the paper is wellwritten and easy to follow cons 1 
highdimensional examples other than the simple gaussian setting are missing 2 there is no analysis of computational complexity of the proposed algorithm also the training expense is not reported 3 the double gradient in the second regularization term could be expensive to evaluate questions 1 in both theorem 41 and 42 the smoothness of the potentials is crucial if the smoothness b is too large then the bound presented is essentially useless please comment on itdocsepsummary the paper derives the barycenter mapping problem as an optimization over congruent convex functionseach convex potential corresponding to a component distribution congruency is a property on the set of optimal potential functions that ties them together however this optimization is quite challenging and so the paper derives an principled objective function that includes two regularization terms the first regularization term encourages congruency of the set of convex functions and can be seen as a variational bound on an ideal congruency regularization the second regularization term encourages the pairs of convex functions to be conjugate the paper proves that the optimal solution of this objective is the true potentials and thus no bias is introduced the proposed approach is demonstrated on the tasks of generative modeling 2256 dimensions posterior inference and color pallete barycenters 3d strengths nice problem formulation and setup with respect to prior methods the derivation of the final objective function is clearly laid out and wellmotivated each problem that is encountered is explained and then a solution or approximation is introduced the theoretical results give appropriate grounding for the approach the empirical results outperform prior potentialbased methods for barycenters weaknesses it is unclear if this method can scale in terms of samples and dimensions what is the computational cost of estimating these input convex neural networks can you provide approximate wallclock times for the various methods and dimensionalities what are the key computational bottlenecks either memorywise or computationwise the experiments seem small scale with a max dimension of 256 barycenters for high dimensional realworld data eg even mnist 784d or some other highdimensional realworld dataset would improve the paper the paper lacks comparison to methods that do not recover 2n potential functions what are the closest methods for barycenter that do not use potential functions for example could the algorithms be compared to discretized barycenter algorithms to show the breakdown in higher dimensions other comments or questions is d in equation 5 supposed to be n some typos above eqn 12 update after author response i appreciated the authors response to the scalability and raw computation times thank you also for the additional comparison to a nonpotential function method this will be a good comparison my main concerns were answered and i still think this is a good paper ### Summary:
the authors study the 2-wasserstein barycenter problem between measures they propose a novel formulation that leverages a condition called congruence that the optimal transport monge maps here parameterized via convex potentials must obey at optimality and they introduce various regularizers to encourage that property the idea is demonstrated on convincing synthetic experiments and on a simple color transfer problem although experiments are a bit limited i do believe and follow here the opinion of all reviewers that there is novelty in this approach and that this paper is a worthy addition to the recent line of work trying to leverage icnns and breniers theorem to solve ot problems
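for readers less familiar with the problem discussed above, a minimal sketch of the wasserstein-2 barycenter objective the reviewers refer to (standard notation assumed, not taken from the paper, with weights w_k and input measures mu_k):

\[
\nu^\star \in \arg\min_{\nu}\ \sum_{k=1}^{n} w_k\, W_2^2(\mu_k,\nu), \qquad \sum_{k} w_k = 1,\ \ w_k > 0 .
\]

by breniers theorem each optimal transport map from \(\mu_k\) to the barycenter is the gradient of a convex potential, \(T_k=\nabla\psi_k\) with \((T_k)_\#\mu_k=\nu^\star\), which is why parameterizing the potentials with icnns and regularizing them toward conjugacy and congruence, as the reviews describe, can recover the barycenter; the papers exact regularizers and congruence condition are not reproduced here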
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper explores the task of finding discrete adversarial examples for current dialog models in a post hoc manner ie once models are trained in particular the authors propose an optimization procedure for crafting inputs utterances that trigger trained dialog models to respond in an egregious manner this line of research is interesting as it relates to realworld problems that our models face before they can be safely deployed the paper is easy to read nicely written and the proposed optimization method seems reasonable the study also seems clear and the results are fairly robust across three datasets it was also interesting to study datasets which a priori seem like they would not contain much egregious content eg ubuntu help desk conversations my main question is that after reading the paper im not sure that one has an answer to the question that the authors set out to answer in particular are our current seq2seq models for dialogs prone to generating egregious responses on one hand it seems like models can assign higherthanaverage probability to egregious responses on the other it is unclear what this means for example it seems like the possibility that such a model outputs such an answer in a conversation might still be very small quantifying this would be worthwhile further one would imagine that a complete dialog system pipeline would contain a collection of different models including a seq2seq model but also others in that context is it clear that its the role of the seq2seq model to limit egregious responses a related aspect is that it would have been interesting to explore a bit more the reasons that cause the generation of such egregious responses it is unclear how representative is the example that is detailed i will kill you in section 53 are other examples using words in other contexts also it seems reasonable that if one wants to avoid such answers countermeasures eg in designing the loss or in adding common sense knowledge have to be considered other comments i am not sure of the value of section 3 in particular it seems like the presentation of the paper would be as effective if this section was summarized in a short paragraph and perhaps detailed in an appendix section 31 continuous relaxation of the input embedding what does that mean since the embedding already lives in continuous space i understand that your study only considers when optimizing for egregious responses dialogs that are 1turn long i wonder if you could increase hit rates by crafting multiple inputs at once in section 43 you fix g size of the word search space to 100 have you tried different values do you know if larger gs could have an impact of reported hit metrics in table 3 results from the first column normal ogreedy seem interesting wouldnt one expect that the model can actually generate almost all normal responses your results indicate that for ubuntu models can only generate between 65 and 82 of actual test responses do you know what in the ubuntu corpus leads to such a result in section 53 you seem to say that the lack of diversity of greedydecoded sentences is related to the low performance of the ogreedy metric could this result simply be explained because the model is unlikely to generate sentences that it has never seen before you could try changing the temperature of the decoding distribution that should improve diversity and you could then check whether or not that also 
increases the hit rate of the ogreedy metric perhaps tailoring the mal lists to each specific dataset would make sense i understand that there is already some differences in between the mal lists of the different datasets but perhaps building the lists with a particular dataset in mind would yield better results docsepmain contribution devising and evaluating an algorithm to find inputs that trigger arbitrary egregious outputs i will kill you in vanilla sequencetosequence models as a whitebox attack on nlg models clarity the paper is overall clear i found some of the appendices esp b and c to be important for understanding the paper and believe these should be in the main paper moving parts of appendix a in the main text would also add to the clarity originality the work looks original it is an extension of previous attacks on seq2seq models such as the targetedkeywordattack from cheng et al 2018 in which the model is made to produce a keyword chosen by the attacker significance of contribution the lack of control over the outputs of seq2seq is a major roadblock towards their broader adoption the authors propose two algorithms for trying to find inputs creating given outputs a simple one relying on continuous optimization this is shown not to work breaking when projecting back into words and another based relying on discrete optimization the authors found that the task is hard when using greedy decoding but often doable using sampled decoding note that in this case the model will generate a different output every time my takeaways are that the task is hard and the results highlight that vanilla seq2seq models are pretty hard to manipulate however it is interesting to see that with sampling models may sometimes be tricked into producing really bad outputs this whitebox attack applicable to any chatbot as the authors noted an egregious output for one application go to hell for customer service may not be egregious for another one go to hell in mt overall the authors ask an interesting question how easy is it to craft an input for a seq2seq model that will make it produce a very bad output the work is novel several algorithms are introduced to try to solve the problem and a comprehensive analysis of the results is presented the attack is still of limited practicality but this paper feels like a nice step towards more natural adversarial attacks in nlg one last thing the title seems a bit misleading the work is not about detecting egregious outputsdocsep positive aspects of this submission this submission explores a very interesting problem that is often overlooked in sequencetosequence models research the methodology in sections 4 and 5 is very thorough and useful good comparison of lasth with attention representations which gives good insight about the robustness of each architecture against adversarial attacks criticism in section 3 even if the l1 projection experiments seem to show that generating egregious outputs with greedy decoding is very unlikely it doesnt definitely prove so it could be that your discrete optimization algorithm is suboptimal especially given that other works on adversarial attacks for seq2seq models use different methods such as gradient regularization cheng et al 2018 similarly the bruteforce results on a simplified task in appendix b are useful but its hard to tell whether the conclusions of this experiment can be extrapolated to the original dialog task given that you also study ogreedyhit in more detail with a different algorithm in sections 4 and 5 i would 
consider removing section 3 or moving it to the appendix for consistency ### Summary:
this work examines how to craft adversarial examples that will lead trained seq2seq models to generate undesired outputs where generating is here defined as assigning higher than average probability to those outputs making a model safe for deployment is an important unsolved problem and this work looks at it from an interesting angle all reviewers agree that the paper is clear well presented and offers useful observations while the paper does not provide ways to fix the problem of egregious outputs being probable as pointed out by reviewers it is still a valuable study of the behavior of trained models and an interesting way to probe them that would likely be of high interest to many people at iclr
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes a new machine learning method for classification called fuzzy learning machine the paper draws from concepts from cognitive science to derive a method based on fuzzy similarity relations of examples on the input space the training method learns a similarity function and selects a set of exemplars from each category used during the prediction phase to compute the similarity of new examples to the exemplars in each category and then assign it to the category with more similar examples strengths the method proposed is interesting and brings up a number of novelty elements the method seems to improve significantly in relation to existing classification methods on a large number of data sets weaknesses the paper makes a lot of assertions about human cognition that are questionable for instance in essence the process of classification is the process of concept cognition concept contains our knowledge about the world and we use concept to understand and organize the world without it there will be no human intelligence at all similarity plays a crucial role in the process of human classification concept is represented based on similarity for children which is also a basic choice for adults also sometimes it is difficult to understand if the paper makes assertions about its own definitions or about human cognition as in the intrinsic property of concept is just the fuzziness rather than the randomness i do not see a problem in using assumptions based on cognitive science for building models in fact most models in ai do that somehow however care should be taken to not state these assumptions in the paper as settled truths i rather see the paper provide in advance a list of theories hypotheses and assumptions considered along with references for them and then describe the model proposed using them as a basis finally without details on how well the other methods used for comparison were adjusted it is hard to know if the comparison is fair i do not see any limitations or potential negative social impact of this work docsepin the paper fuzzy learning machine the authors propose an approach to learn a classifier via a neural network forming a fuzzy equivalence relation deriving the approach from fuzzy set theory the authors find their approach to perform particularly well across a number of datasets comparing the approach to various other classifiers weaknesses the idea of employing fuzzy set theory for classification tasks is not new at all and i am wondering what is now the methodological novelty of the approach in general the idea of comparing instances data points according to their similarity is the basic idea behind learners using kernel functions where the shape of a concept is specified via the respective kernel however there is also a relatively large corpus of literature on classifiers leveraging fuzzy set theory even working exactly with neural networks and the idea of fuzzy equivalence relations still this related work is neither discussed nor cited in the paper see for example the following references acharya u rajendra et al classification of heart rate data using artificial neural network and fuzzy equivalence relation pattern recognition 361 2003 6168 moser bernhard on representing and generating kernels by fuzzy equivalence relations journal of machine learning research 712 2006 meier andreas and nicolas werro a fuzzy classification model for online 
customers informatica 312 2007 senge robin and eyke hllermeier topdown induction of fuzzy pattern trees ieee transactions on fuzzy systems 192 2010 241252 kuncheva ludmila fuzzy classifier design vol 49 springer science business media 2000 sun ct and js jang a neurofuzzy classifier and its applications proceedings 1993 second ieee international conference on fuzzy systems ieee 1993 uebele volkmar shigeo abe and mingshong lan a neuralnetworkbased fuzzy classifier ieee transactions on systems man and cybernetics 252 1995 353361 it is unclear to me how this part of the literature is widely ignored by the authors when they seem to come from that area overall the paper has a good structure but could benefit from proofreading especially a vs an is a frequent problem in the text eg a input space a output space a fer then classifier concept and classification process are used without an article some parts also seem overly complicated to me for example consider the proof that a nonlinear model is needed to tackle the derived problem where the instances are concatenated i do not know whether yet another proof for the fact that an xor problem cannot be tackled via a linear model is really needed this could have been simplified furthermore i find that the example given in figure 1 is not very well chosen the concepts cat and dog have crisp biological borders and a human not being able to distinguishing the two categories is rather due to epistemic uncertainty than fuzziness of the concept borders personally i would also argue that non of the three cats is more or less representative of the category or concept cat a claim that was made by the authors is that their approach indeed learns concepts instead of just assignments however there was no proof given in the paper that this is really the case especially there is no presentation or demonstration of any particular concepts that were induced by fitting their model i would even argue that from figure 3 is rather becomes clear that it is learning not really any concepts as the fsr matrix shows more or less the same color for every cell not being on the main diagonal if it was to learn real concepts i would also expect that a 0 would receive a lower membership score for the concept 1 than a 7 for example a better overall performance is no proof for the claim that the method learns concepts another branch of classification literature also tries to capture concepts for classification purposes analogy learning bayoudh sabri laurent miclet and arnaud delhay learning by analogy a classification rule for binary and nominal data ijcai 2007 strengths since most people in the machine learning community will not be that much familiar with fuzzy set theory i liked it very much that all fundamental definitions were provided by the authors in the paper or supplementary material to make it self sufficient according to the experiments the proposed method seems to perform very strong compared to a set of almost 200 classifiers however the way how the rankings were calculated is a little bit odd why are 65 learners sharing rank 1 with 100 accuracy receive a rank of 65 this will most likely also affect the average rank statistics compared for the ten classifiers later on i would rather expect that performances with a tie receive the same higher rank leaving free the next n1 spots in the ranking limitations except for runtime complexity to compute the fer matrix are not really discussed when does the approach fail and why does it fail docsepthis paper proposes a new 
learning machine for the general classification problem which is one of the most important problems in mlai the new learning machine is based on the concept cognition theory in cognitive science and fuzzy set theory in mathematics science so its working mechanism is highly explainable and has a solid theoretical guarantee meanwhile a large number of systematic experimental results demonstrate the superiority of the proposed method the manuscript focuses on the classification problem which is one of the most important problems in mlai the manuscript reexamines the classification from the perspective of concept cognition and reveals the essence of classification and the manuscript provides a new view to interpret the structure of the classification problem by establishing the equivalence between binary classification problem and classification problem by employing equivalence relation in set theory furthermore the manuscript realizes that fuzziness of concept is the main source of uncertainty in classification and then employs the fuzzy set theory to model this kind of uncertainty based on the above conclusions the classification problem is modeled as a fuzzy equivalence relation problem which well preserves the nature and intrinsic fuzziness of the classification problem whats more the manuscript designs a clever model and loss function to approximate the fuzzy equivalence relation effectively and efficiently therefore in this manuscript the main proposals have the theoretical basis of cognitive science and the key conclusions are proved mathematically and extensive experiments compared with 179 methods on 121 data sets verify the rationality and superiority of the proposed method overall the manuscript is clearly written and well organized with good clarity to enhance the readability and completeness it is suggested that some contents in the appendix should be moved to the corresponding part of the main manuscript for example the analysis of the working mechanism of the existing classifiers should be moved to the introduction of the main manuscript however in the current manuscript these contents are placed in appendix a2 na ### Summary:
the paper proposes an approach for the design of neural networks for classification based on fuzzy set theory and a specific implementation is presented and experimentally assessed arguments from cognitive science are also used to justify the proposed approach although mainly as inspiration the lack of references to fuzzy neural network models from the relevant literature in the initial version of the paper has been addressed in the revised version and the authors rebuttal seems to have clarified most of the issues raised by reviewers the experimental assessment seems to be robust personally i find the jargon used in the paper a bit unfit for neurips standards however i do not think this is a valid reason for rejecting a paper for which no serious drawback has emerged in any case i think it is good for neurips to diversify the range of approaches and methodologies covered by the scientific program
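the prediction phase described in the first review above (compare a new example to per-class exemplars via a learned similarity and assign it to the class with the more similar exemplars) can be sketched in a few lines; this is a hedged toy illustration with made-up names (predict, rbf_similarity, exemplars_by_class), not the papers actual fuzzy learning machine, and it omits the learned fuzzy equivalence relation and the exemplar selection procedure:

```python
import numpy as np

def rbf_similarity(x, e, gamma=1.0):
    # stand-in for the learned fuzzy similarity function described in the review
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(e)) ** 2))

def predict(x, exemplars_by_class, similarity=rbf_similarity):
    # assign x to the class whose exemplars it is most similar to on average
    scores = {c: np.mean([similarity(x, e) for e in exemplars])
              for c, exemplars in exemplars_by_class.items()}
    return max(scores, key=scores.get)

# usage with two toy classes
exemplars = {"cat": np.array([[0.0, 0.0], [0.1, 0.2]]),
             "dog": np.array([[1.0, 1.0], [0.9, 1.1]])}
print(predict([0.05, 0.1], exemplars))  # -> "cat"
```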
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper shows that significant speedup gains can be achieved by using klregularization with information asymmetry in sparsereward settings different from previous works the policy and default policy are learned simultaneously furthermore it demonstrates that the default policy can be used to perform transfer learning pros overall the paper is wellwritten and the organization is easy to follow the approach is novel and most relevant works are compared and contrasted the intuitions provided nicely complements the concepts and experiments are thorough cons the idea of separating policy and default policy seems similar to having high and low level controller hlc and llc in hierarchical control where llc takes proprioceptive observations as input and hlc handles task specific goals in contrast one advantage of the proposed method in this work is that the training is endtoend would have liked to see comparison between the proposed method and hierarchical control as mentioned the proposed method does not offer significant speedup in densereward settings considering that most of the tasks experimented in the paper can leverage dense shaping to achieve speedup over sparse rewards itd be nice to have experiments to show that for some environments the proposed method can outperform baseline methods even in densereward settings docsepthis is a very interesting piece of work we know from cognitive science literature that there are 2 distinct modes of decision making habit based and topdown control goal directed decision making the paper proposes to use this intuition by using information theoretic objective such that the agent follows default policy on average and agent gets penalized for changing its default behaviour and the idea is to minimize this cost on average across states the paper is very well written i think this paper would have good impact in coming up with new learning algorithms which are inspired from cognitive science literature as well as mathematically grounded but i dont think paper in its current form is suitable for publication there are several reasons but most important 1 most of the experiments in this paper use of the order of 109 or even 1010 steps its practically not possible for anyone in academia to have such a compute now that said i do think this paper is pretty interesting hence is it possible to construct a toy problem which has similar characteristics and then show similar results using like 106 or 107 steps i think it would be easy to construct a 2d pompd maze navigation env and test similar results this would improve the paper as well as could provide a baseline which people in the future can compare to 2 it becomes more important to compare to stronger baselines like maximum entropy rl for ex soft actor critic and spend some good of amount time getting these baselines right on these new environments docsep originality this paper studies how to use klregularization with information asymmetry to speed up and improve reinforcement learning rl compared with existing work the major novelty in the proposed algorithm is that it uses a default policy learned from data rather than a fixed default policy moreover the proposed algorithm also limits the amount of information the default policy receives ie there is an information asymmetry between the agent policy and the default policy in many applications the default policy is purposely chosen to be goal 
agnostic and hence conducts the transfer learning to the best of my knowledge this informationally asymmetric klregularization approach is novel clarify the paper is well written in general and is easy to follow significance i think the idea of regularizing rl via an informationally asymmetric default policy is interesting it might be an efficient way to do transfer learning generalization in some rl applications this paper has also done extensive and rigorous experiments some experiment results are thoughtprovoking pros and cons pros 1 the idea of regularizing rl via an informationally asymmetric default policy is interesting to the best of my knowledge this informationally asymmetric klregularization approach is novel 2 the experiment results are extensive rigorous and thoughtprovoking cons 1 my understanding is that this informationally asymmetric klregularization approach is a general approach and can be combined with many policy learning algorithms it is not completely clear to me why the authors choose to combine it with an actorcritic approach see algorithm 1 why not combine it with other policy learning algorithms please explain 2 this paper does not have any theoretical results i fully understand that it is highly nontrivial or even impossible to analyze the proposed algorithm in the general case however i recommend the authors to analyze possibly a variant of the proposed algorithm in a simplified setting eg the network has only one layer or even is linear to further strengthen the results 3 the experiment results of this paper are interesting but i think the authors can do a better job of intuitively explaining the experiment results for instance the experiment results show that when the reward is dense shaping the proposed method and the baseline perform similarly might the authors provide an intuitive explanation for this observation i recommend the authors to try to provide intuitive explanation for all such interesting observations in the paper ### Summary:
strengths the paper introduces a promising and novel idea ie regularizing rl via an informationally asymmetric default policy the paper is well written and it has solid and extensive experimental results weaknesses there is a lack of benefit on dense reward problems which the authors acknowledge as a limitation there are also some similarities to hrl approaches a lack of theoretical results is also suggested to be fair the paper makes a number of connections with various bits of theory although it perhaps does not directly result in any new theoretical analysis a concern of one reviewer is the need for extensive compute and for comparisons to stronger maxent baselines the authors provide a convincing reply on these issues points of contention while the scores are nonuniform 7 7 5 the most critical review r1 with a score of 5 is in fact quite positive on many aspects of the paper ie this paper would have good impact in coming up with new learning algorithms which are inspired from cognitive science literature as well as mathematically grounded the specific critiques of r1 were covered in detail by the authors overall the paper presents a novel and fairly intuitive idea with very solid experimental results while the method has some theoretical grounding the contributions are more experimental than theoretical the reviewers are largely enthused about the paper the ac recommends acceptance as a poster
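as context for the regularization idea praised above, a generic hedged form of the kl-regularized objective with information asymmetry, using assumed notation rather than the papers own: the agent maximizes

\[
\mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t}\big(r(s_t,a_t)\;-\;\alpha\,\mathrm{KL}\big(\pi(\cdot\mid x_t)\,\big\|\,\pi_0(\cdot\mid x^{D}_t)\big)\big)\Big],
\]

where \(x_t\) is the agents full observation, \(x^{D}_t\) is the restricted (e.g. goal-agnostic) observation given to the default policy \(\pi_0\), and both \(\pi\) and \(\pi_0\) are learned simultaneously; the papers exact losses and its actor-critic instantiation may differ from this sketch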
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper analyzes the global minima of deep linear networks with weight decay under the assumption of linear architecture l2 regularization and population risk the paper takes advantage of the symmetry and invariance in the network and derives the analytical expression of the minimum points depending on the regularization strength for twolayer networks zero is either the global minimum or a saddle point for deeper networks zero is always a local minimum and can be global see figure 1 the paper also tries to connect these theoretical results with some phenomena in deep nonlinear networks strengths 1 the paper proposes a set of assumptions under which the analytical expression of the minimum points can be derived and the corresponding properties can be analyzed 2 i think the most interesting contribution of this paper is to point out that the weight decay ie the l2 regularization may introduce a local minimum at zero for deep networks it is not surprising since the regularization term in eqn 1 is quadratic while l0 has higher order it is good to formulate the phenomenon as a rigorous theory under the assumptions 3 the presentation of the paper is pretty good the settings are clearly stated and the results are supported by rigorous proofs there are also detailed comments and discussions about the meaning and possible implications of the theoretical results weaknesses my main concern is that the assumptions in this paper may over simplified the problem the proofs are straightforward though i believe theorem 2 is not trivial and heavily depend on the symmetry thanks to the assumptions that does not hold in general cases if considering weaker assumptions i guess we may still prove that zero is a local minimum since the regularization is quadratic and the square loss has higher order but the quantitative results in this paper may not be extended since the implications in section 5 are all under weaker assumptions the theoretical results may not support the discussions here strongly enough in addition i guess a resnet architecture may avoid the local minimum at zero since the square loss is not in high order now please see the weaknesses section above docsepthis work studies the population loss landscape of stochastic eg in the sense of dropout deep linear neural networks under 2norm weight regularization the key contribution of this work is the derivation of analytical expressions for the global minima of the aforementioned loss landscape at least up to a scalar quantity the implications of this result for training both linear and nonlinear neural networks are discussed in particular this result illustrates how weight decay and depth can lead to a more challenging optimization problems as well as the importance of the role of network initialization in avoiding basins of attraction around bad minima originality this paper continues the line of work analyzing linear neural networks i am not a specialist concerning the study of linear networks but the results appear at least to the best of my knowledge novel and interesting quality and clarity on the whole i think the paper is well organized well written and clear a few very minor suggestions in regard to the papers presentation are as follows i think upfront you could state your network architecture forward pass function more clearly in terms of matrix vector product and give the dimensions of each of your parameter matrices notation 
wise both scalars and vectors use lower case characters which can be a bit confusing perhaps using bold lower case characters for vectors might help it might be helpful for the reader to restate statements of lemmas and theorems in the supplementary so they dont have to flick between line 475 in the supplementary we see that the left hand side textitis larger significance i think the extent to which understanding linear networks is important for understand nonlinear networks is not entirely clear i still think it is important that we understand deep linear networks regardless however and this work seems like a useful contribution i think the authors are reasonably upfront about the limitations of their work although i think they could perhaps add some suggested avenues for future works i cant envisage how this work might have a negative societal impact docsepthis paper provides a closedform solution up to some constant of the global minima of linear neural networks when trained using square loss and strictly positive weight decay this result can be extended to the case when the neurons are stochastic and independent the formulas for the global minima are directly or potentially related to the weight decay depth stochasticity of neurons and signal strength from training data the authors also used the characterizations of global minima to explain multiple phenomena happening in real neural network training eg deeper networks are harder to optimize this paper also provided variance analysis in the asymptotic limits of network hyperparameters and did smallscale experiments on synthetic data to validate their theoretical results in nonlinear networks strengths this paper theoretically gives an analytical formula up to some constant for the global minima of linear neural networks trained with weight decay and this formula works for deep linear networks and could be generalized to independent stochastic neurons these analytical expressions provide opportunities to study the properties of these global minima of deep linear networks in detail it is an interesting idea to connect the formula of the global minima of neural networks to various common phenomena in this field eg the collapses in deep learning this paper is generally wellwritten and wellstructured the notations used in this paper are mostly welldefined and the intuitions and implications of the theoretical results are provided in the main text these make this paper easy to understand the theoretical proofs in this paper appear to be correct and the related works are adequately cited weaknesses the theoretical results in this paper are all about linear networks and the relationship between linear networks are nonlinear ones seems somewhat unclear so most of the conclusions in this paper might not translate directly to nonlinear settings this is my major concern about this paper the authors claimed on line 16 that the landscapes of linear networks are believed to well approximate that of nonlinear ones but this claim might be vague and need further explanation the authors also did experiments with a small twolayer nonlinear network on synthetic data but the scale of this experiment is small so it is unclear whether this result still holds for more general settings it would be better if the authors could provide more theoretical or empirical evidence connecting the loss landscape of linear neural networks and those of nonlinear ones the characterizations of global minima might not be enough to characterize the training of neural networks 
which depends on the properties of the entire loss landscape for instance it is possible that the weights of neural networks could diverge and it never reaches a minimum and it is also possible that the weights converge to a bad local minimum or saddle point it might be better if the authors could theoretically or empirically eliminate these possibilities and show that the network weights will always converge to the points that they characterized ie either the global minima or the bad local minimum at 0 the proof techniques used in this paper seem to heavily rely on the existence of weight decay at all layers making it hard to be generalized to other settings without weight decay relationships like equation 13 will break and the characterizations of the local and global minima could become much more complicated some arguments made in this paper might be somewhat vague for example in line 268 it might be unclear what the authors mean by cannot learn the data minor comments the details of the experiments the authors did to produce figure 2 are missing these details eg how the data are generated and what the hyperparameters are could be important for interpreting the experimental results the notation v on the lefthand side of equation 8 seems undefined should it be defined as some term in equation 5 typos line 243 are global and cannot generalize are global cannot generalize line 269 twolayer net and the existence twolayer net the existence update after author response i have read all other reviews and the authors responses and i decided to increase my score by 2 there are two main reasons why i increase my score 1 the authors have added empirical evidence eg resnet on cifar to further relate the loss landscape of linear neural networks to nonlinear ones 2 the proof framework in this paper can be extended to more general settings with similar results and the authors have provided theoretical results in more general settings especially when there is no weight decay the authors stated the assumptions for their theoretical results in the paper and had many discussions about the implications it might be better if the authors could discuss more explicitly the limitations of this paper in the implication and conclusion sections this paper is mostly theoretical and focuses on a fundamental problem in general neural network training and i do not see any immediate negative societal impact of this work docsepthis paper studies deep linear neural networks with weight decay and stochastic neurons the authors show that the analytical global minima of square loss can be found for shallow neural networks thm1 and deep neural networks thm2 the analysis has some implications on the role of weight decay and the depth of neural networks strengths i like the setting in this paper which is clean and simple but can manifest interesting properties of neural networks the results are very interesting especially the part where bad minima emerge with weight decay weakness 1 i understand the difficulty of analyzing global minima hence some assumptions are needed eg diagonal a0 for the exact form of b for the shallow neural networks and single data but some results are not easy to interpret eg thm2 prop 3 maybe the authors can provide more intuitions 2 i have a minor concern that in the main contributions point 4 seems irrelevant to this paper whats v in eq8 yes ### Summary:
there is a clear consensus amongst the reviewers that the manuscript advances the theory of deep linear networks to a degree warranting acceptance at neurips the authors responded well to the issues raised by the reviewers which resulted in increased reviewer support for acceptance the inclusion of weight decay stochasticity and architectures beyond feedforward networks makes this a valuable addition to the theory of deep linear networks
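the point made in the first review above, that the weight-decay term is quadratic while the data-fit term is of higher order in the weights, can be made concrete with a scalar toy model (my notation, not the papers setting, which also covers stochastic neurons and population risk):

\[
\ell(w_1,\dots,w_L)\;=\;\big(w_L\cdots w_1\,x-y\big)^2\;+\;\lambda\sum_{i=1}^{L}w_i^{2}.
\]

near the origin the data-fit term deviates from \(y^2\) only at order \(\|w\|^{L}\) while the regularizer grows at order \(\|w\|^{2}\), so for \(L\ge 3\) the origin is always a local minimum, whereas for \(L=2\) both terms are quadratic and whether zero is a minimum or a saddle depends on how \(\lambda\) compares with the signal strength, matching the dichotomy the reviews describe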
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents new methods for inference and sampling for archimax copulas archimax copulas are a family of copulas defined through an archimedian generator and a stable tail dependence function stdf in order to discuss the inference and sampling for archimax copulas the authors first proposed inferential and sampling methods for archimedian generator and stdf then combining these methods the methods for inference and sampling for archimax copulas are established in experiments it is seen that the proposed inferential methods for archimedian generator and stdf show satisfactory performance other experiments are given to illustrate that archimax copulas outperform or work as well as some existing models for a couple of real datasets strengths a originality the presented inferential and sampling methods for archimax copulas seem new these methods are derived mainly by combining existing methods for archimedian generators and stdfs b quality the paper seems technically sound comprehensive experiments are given to assess the performance of the proposed inferential methods and compare submodels of archimax copulas with some existing models c clarity the paper is clearly written in general section 2 providing the background of the presented theory would be helpful for readers who are not family with copulas d significance archimax copulas are flexible models which include archimedian copulas and extremevalue copulas as special cases therefore the proposed inferential and sampling methods for archimax copulas could be useful in practice when flexible modelling is required weaknesses e quality apart from archimax copulas there exist other flexible families of copulas such skewt copulas see eg joe 49 section 3172 the paper does not sufficiently compare the archimax copulas with those existing copulas f significance i am not sure about the popularity of archimax copulas and the importance of the related theory this paper presents nonetheless i appreciate the results of experiments which suggest the usefulness of the archimax copulas the authors have addressed the limitations and potential negative societal impacts of their work in section 5 depending on the authors response to my questions g and i i might claim that the usefulness of archimax copulas is limited docsepthe authors propose scalable estimation and sampling procedures for archimax copulas on simulated and real data they demonstrate that archimax copulas fit using their procedure can model complex data with dependencies between extreme values accurately in comparison to existing deep generative methods the paper bridges two important areas of research copulas and deep generative modeling it is highly original the first method of its kind and technically excellent it is also potentially very significant in its impact modeling rare events is critical to managing risk in real world applications and relying naively on modern deep generative approaches can potentially be very problematic the paper is overall quite clear though there are two areas where i struggled first an intuitive explanation as to why archimax copulas are good for modeling dependencies between extreme events would be helpful to the reader second id appreciate a concise statement of the complete model up front explaining how the deep generative model determines the stdf the authors clearly explain some of the methods important limitations docsepthis paper 
proposes novel procedures for both inference and sampling of archimax copulas this family of copulas is important due to their ability to represent the bulk and tail of distributions simultaneously which can be suitable for healthcare and hydrology realworld data the authors propose a hybrid approach mixing copulas and neural networks to allow for more flexible estimation in experiments the proposed method is compared to sota density modeling techniques and the results suggest that their method can extrapolate to tails and scale to higher dimensions strengths originality to the best of my knowledge this is the first work to address flexible density estimation of bulk and tail distribution with archimax copulas quality the authors put a lot of effort into including a high level of technical details and experiments in the paper and appendix clarity since copulas are not a straightforward tool in the machine learning community i appreciate the background overview and related work mentioned throughout the paper and supplementary significance considering the tails not only bulk of the data is overlooked problem and can be very significant in many realworld applications weaknesses originality although the presented methodology is novel it does still build on existing work regarding archimax copulas quality i would expect more challengingmotivating experimental results to support the claims of significance and contribution from the introduction clarity a running example or one real data motivating example could improveclarify why archimax copulas are an appropriatenecessary tool in critical scenarios this can help bring the paper closer to the ml community make it more relevant for readers significance since copulas are still not widespread at the ml conferences i feel like additional motivation for such papers is needed either to showcase superiority on largescale realworld datasets or find some new tasks where sota models fail i also appreciate the code submission and effort of implementing everything in python rare for statistics methods the authors have addressed the limitations and potential societal negative impact of their work docsepthe authors propose an efficient inference and sampling schemes for archimax copulae based on learning of the generator and stable tail dependence functions through deep learning techniques originality the work is original to my knowledge the authors propose a new method to infer and sample from archimax copulas extending the previous work in this area clarity the authors go a long way to make sure the paper is accessible by a larger machine learning community they provide necessary background info on copulae and their use in machine learning they also provide extensive derivations in the appendix that said the material itself is quite dense i have not found typos one minor thing is that figure 1 does not seem to be referenced anywhere in the text quality the authors build upon a previously developed theory and methods to derive a scheme to infer archimax copulae via the means of deep learning models which also allows for an easy sampling from the authors provide an analysis of performance of the proposed method comparing it to other stateoftheart methods the details of the experiment setups are given in detail in the supplementary material the authors provide extensive backgroundrelated work review in both the main text and the appendix significance inference of the multivariate distributions from the data is a core statisticalmachine learning problem the 
authors propose a way to infer multivariate dependencies via archimax copulae representations of which are learned via deep learning models the method also allows for sampling from the learned distribution the method serves both the bulk and the tails of a distribution with that said a proposed method is of major significance for the field the authors faithfully address limitations and possible negative impact of the proposed method ### Summary:
the paper proposes a new method for inference and for sampling in archimax copulas all the reviewers praised the soundness and clarity of the paper the novelty of the ideas and the experimental results copulas might not be one of the core topics of the neurips community but the reviewers pointed out that 1 the authors did a great job at explaining copulas a valuable tool to model extreme events to the ml community and 2 the method builds a connection between copulas and deep generative modeling and hence opens new research directions they all enthusiastically recommend accepting the paper and i agree with them some of the reviewers hcqw 5yjr also supported the idea of highlighting the paper with an oral or spotlight presentation
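for readers unfamiliar with the family discussed above, the usual form of an archimax copula as defined in the copula literature (not quoted from the paper) combines an archimedean generator \(\psi\) with a stable tail dependence function \(\ell\):

\[
C(u_1,\dots,u_d)\;=\;\psi\big(\ell(\psi^{-1}(u_1),\dots,\psi^{-1}(u_d))\big),
\]

so the generator governs the bulk of the dependence while \(\ell\) controls the extremal behaviour, which is why the reviewers stress that these copulas can represent the bulk and tail of a distribution simultaneously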
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies square loss in a realizable timeseries framework the main result shows that whenever a trajectory hypercontractivity condition holds the risk of least squares estimator on dependent data matches the iid rate orderwise after a burnin time the paper formulates a phenomenon called learning with little mixing and presents several examples where such phenomenon occurs this paper gives solid theoretical results on learning with dependent data it shows on a broad class of examples the lse applied to timeseries model behaves as if all samples are independent given enough data although i am not familiar with the background of this problem the results look insightful on the other hand id also be curious to see if the theory can be testified empirically on simple regression problems there is no negative societal impact docsepthe authors study the problem of learning from dependent data over time with the aim of obtaining empirical risk minimization bounds that do not depend on the mixing time of the process they consider a timeseries framework with martingale difference noise and prove a general result under an assumption they introduce called trajectory hypercontractivity and sublinear growth in the dependency matrix the risk of the leastsquares estimator matches the iid rate after a burnin time note the burnin time can depend on the mixing this is in contrast to naive bounds where the effective sample size is deflated by a factor of the mixing time the proof relies on using the hypercontractivity to control the lower tail of sums involving the dependent random variables the authors specialize the result both to nonparametric function classes and those with logarithmic metric entropy they give several examples where their conditions are satisfied and which recover or generalize previous results finitestate markov chains bounded function classes for which l2l2epsilon norms are equivalent and infinitedimensional function classes based on subsets of ell2mathbb n ellipsoids eg functions of bounded norm in a rkhs the strength of the paper lies in the fact that it gives a very general result that unites previous results under a general framework eg results on learning linear dynamical systems and finite markov chains there is a large degree of quantitative flexibility in the assumption that the authors introduce trajectory hypercontractivity which interpolates between boundedness and smallball behavior the proofs in the appendix are easy to follow however the main body of the paper is technically dense and not easy to digest the examples are fairly abstract it would help the exposition significantly to expand on concrete instantiations of the theorem moving more techical commentary to the appendix as necessary for example writing out the theorem for linear dynamical systems obtained from the general theorem there are also some limitations to the theorem see below certain known results are not covered by the framework in particular learning linear dynamical systems that are marginally stable or which have unbounded noise additionally as the max eigenvalue approaches 1 the necessary burnin time given by the theorem blows up whereas known results do not have this dependence this stems from reliance on rate of growth of the dependency matrixwhile the asymptotic rates do not depend on the mixing the burnin time does the authors discuss this in section 43 docsepthe paper shows 
that for mixing systems under an easiness condition the rate of convergence of the lse for rather general hypothesis classes has iid data like performance the paper proves excess risk bounds of lse with dependent data however the results arent very surprising and contributions are a bit too incremental for me na docsepthe authors investigate the square loss in a realizable timeseries framework with martingale difference noise which is an interesting topic in machine learning with noniid data their main result is a fast rate excess risk bound which shows that whenever a trajectory hypercontractivity condition holds the risk of the leastsquare estimator on dependent data matches the iid rate orderwise after a burnin time moreover the authors give some examples of when the condition holds i find the main context easy to follow strength 1 this paper is technical it is clearly written and well organized 2 the result in this paper is significant weakness 1 this paper requires a more detailed discussion and comparison with the previous related work 2 there are some confusing mistake in the proof of the main results 1 this paper lacks a detailed discussion and comparison with the previous work 2 this paper seemed not to give any new insight on this field ### Summary:
this paper studies the problem of learning under dependent data existing bounds usually work by deflating the effective sample size by a factor that depends on the mixing time essentially when the samples are far enough away from each other depending on the mixing time they can be treated as independent this paper introduces a new framework based on a trajectory hypercontractivity condition together with sublinear growth in the dependency matrix this is a flexible perspective and the paper derives both general results and applies them in interesting settings there are some weaknesses eg they cannot recover results in the marginally stable case or in settings with unbounded noise for example as the maximum eigenvalue approaches one the burn-in blows up i think reviewer snkbs perfunctory review should be ignored the paper is somewhat borderline but in my opinion it is technically stronger and more interesting than some of the other borderline papers in my batch i recommend acceptance
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: 1 strength targeting an important problem of fl reducing the communication cost 2 weakness this work simply applies the metalearning method into the federated learning setting i cant see any technical contribution either in the metalearning perspective or the federated perspective the experimental results are not convincing because the data partition is not for federated learning reusing data partition in a metalearning context is unrealistic for a federated learning setting the title is misleading or overclaimed only the adaptation phase costs a few rounds but the communication cost of the metatraining phase is still high the noniid partition is unrealistic the authors simply reuse the dataset partitions used in the metalearning context which is not a real federated setting or in other words the proposed method can only work in the distribution which is similar to the metalearning setting some meta earningrelated benefits are intertwined with reducing communication costs for example the author claimed the proposed method has better generalization ability however this is from the contribution of the metalearning more importantly this property can only be obvious when the data distribution crossclients meet the assumption in the context of metalearning the comparison is unfair to fedavg at least we should let fedavg use the same clients and dataset resources as those used in metatraining and fewrounds adaptation episodic training is a term from metalearning i suggest the authors introduce metalearning and its advantage first in the introduction fewshot flrelated works are not fully covered several recent published knowledge distillationbased fewshot fl should be discussed 3 overall rating i tend to clearly reject this paper because 1 the proposed framework is a simple combination of metalearning and federated learning i cannot see any technical contribution 2 claiming the few round adaptations can reduce communication costs for federated learning is misleading since the metatraining phase is also expensive 3 the data partition is directly borrowed from metalearning which is unrealistic in federated learning after rebuttal the rebuttal does not convince me with evidence thus i keep my overall rating i hope the author can obviously compare the total cost of metalearning phase plus fl finetuning phase with other baselines docsep this paper studied the combination of federated learning tasks in a metalearning setting in particular with the assistance of the pretrained metamodel the new fl models training can be completed within limited communication rounds it was inspired by the metalearning method used in fewshot learning scenario this paper proposed a fewround learning frl algorithm and designed global prototypeassisted learning gpal scheme to assist training it is an interesting topic to combine metalearning with federated learning the weaknesses of this paper are summarized below 1 the proposed method updates metamodel in each client however the metalearning task consumes lots of computation resources and highly relies on the large number of classes these make it hard to train a metamodel in a local client in a federated system although the setting sounds useful it is hard to realize in realworld applications 2 this paper is relevant to two widelyknown fewshot learning methods maml and prototypical network so it is better to consider mamlfl andor protonetfl as baselines to make 
the proposed methods more convincing and prove the efficacy of the proposed loss functions 3 given the complexity of the proposed algorithm and associated hyperparameters the authors could anonymously release the source code in the reviewing stage more details about the experimental platform used in this paper should be given 4 as illustrated in the experimental setup on page 6 the metapretraining phase needs a large number of communication rounds is it appropriate for the bandwidthlimited or timesensitive applications will this be a distracter in fewround learning scenarios 5 for the 5way setup in table 1 there are 5 classes are randomly sampled from the dataset in each episode which means that all the clients contain all the training classes 64 classes for miniimagenet and 351 classes for tieredimagenet locally this is impractical because most local clients only have limited information to share 6 the representation of trainable parameters in algorithm 1 is a little bit confusing for example theta and phi are actually the same parameters the only difference between them is that theta is updated during local update using the support set while phi is updated during local metaupdate using query set since the algorithm is an important part of this paper the definition and use of these parameters should be much clearer if possible the authors can add a detailed interpretation of these two parameters docsep summary this paper proposes a new paradigm to train federated learning models in particular following the spirit of metalearning for fewshot learning the authors propose to metatrain an initial model so that starting from this point only r eg 3 rounds of fl are needed to produce a satisfying test accuracy pros 1 the authors made significant efforts in designing the metalearning strategy for fewround fl 2 the proposed algorithm has the potential to redefine fl training paradigm but there should be more validations my questions and concerns are stated in the next section cons the major concern i have is about the way they construct the dataset and evaluate the algorithm the training task the authors selected is more like a metalearning standard setting and is not common in federated learning so i doubt its performance in realistic fl settings it would be great if the authors can evaluate their algorithm in a standard fl dataset otherwise it is not convincing 1 when constructing the metalearning datasets for each episode the authors sample several classes from the whole dataset and then simulate 10 clients based on the selected samples however in fl setting this is infeasible as the server cannot access the whole dataset the authors should describe how to construct the metalearning procedure given hundreds or even thousands of clients without accessing their local data for example shakespeare dataset has 715 train clients and 715 test clients how to construct the metalearning procedure from this decentralized data and how the algorithm performs are unclear 2 the scale of fl is relatively small at each episode there is only 10 clients however in practical ondevice fl there can be thousands of clients for training and testing for example in 1 stackoverflow has 342477 training clients and 204088 test clients even emnist dataset has 3400 test clients the performance of the proposed algorithm is unclear in these realistic largescale fl problems 3 the metatrain algorithm require the computation of fullbatch loss at each round which consumes more computational resources than vanilla fedavg the 
authors are supposed to discuss this additional overhead postrebuttal comments thanks the authors for the response ive read it and other reviewers comments i feel the authors didnt directly answer my questions and just reiterate what they have in the paper unfortunately it is still unclear to me how to perform metatraining on standard fl training tasks for example shakespeare in 1 in this training task therere total 700 clients does that mean in the metatraining phase we need to sample 700 clients for each episode how to construct this metatrain dataset from a standard federated dataset 1 reddi et al adaptive federated optimization 2020docsepthe paper is to train a metamodel in a small number of selected nodes in a federated learning environment and then use the metamodel to assist the federated learning in reducing the communication rounds it is basically a federated version of a prototypical network the proposed method relies on a strong assumption that there is a metatraining environment in federated learning it is not a standard fl setting moreover given the assistance of the metamodel there is no guarantee that the federated learning environment will converge in a fewround the major technique contribution of the proposed method is how to metatrain a global model in a federated setting in particular it adapts the prototypical network to fit the federated setting it is unclear how the proposed method provides any theoretical contribution rather than applied research in the experiment one dataset is not enough to support the effectiveness of the proposed method more federated learningrelated benchmark datasets should be discussed eg femnist shakespeare texts cifar and feceleba in particular the proposed twostage procedure is equivalent to learn a global model in a standard fl setting and then conduct personalized deployment for each device or a specific group of devices therefore in the experiment part the authors need to add more baseline methods for example some personalized federated learning method should be selected as baseline methods the major concern in algorithm 1 lines 16 and 18 are a federated aggregationbased updating and line 24 is a prototypicalbased meta learner updating these two updating methods are inconsistent which are to optimize different objectives and the authors should give an overall loss to unify the updating steps rather than force two kinds of updating into one framework typo metratraining in figure 1 ### Summary:
this paper proposes a metalearning based fewshot federated learning approach to reduce the communication overhead incurred in aggregating model updates the use of metalearning also gives some generalization benefits the reviewers think that the paper has the following main issues see reviews for more details 1 limited technical novelty the paper seems to simply combine metalearning with federated learning 2 it is not clear whether the communication overhead is actually reduced because the metalearning phase can require significant communication and computation 3 the experimental evaluation in particular the data distribution could have been more realistic i hope that the authors can use the reviewers feedback to improve the paper and resubmit to a future venue
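for readers unfamiliar with the episodic training the reviews above refer to, a minimal sketch of one prototypical-network style episode is given below; the random linear embedding, the 2-way setup and all shapes are illustrative assumptions, not the model used in the paper under review.

```python
# Illustrative sketch only: one prototypical-network episode of the kind the
# reviews mean by "episodic training". Everything here is a hypothetical stand-in.
import numpy as np

def prototypical_episode_loss(support_x, support_y, query_x, query_y, embed):
    """support_x: (Ns, d), support_y: (Ns,) labels in {0..K-1}, query_x: (Nq, d),
    query_y: (Nq,), embed: feature map returning (n, e) arrays."""
    zs, zq = embed(support_x), embed(query_x)
    classes = np.unique(support_y)
    # class prototype = mean embedding of that class's support examples
    protos = np.stack([zs[support_y == c].mean(axis=0) for c in classes])
    # squared euclidean distance from each query embedding to each prototype
    d2 = ((zq[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logits = -d2
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(query_y)), query_y].mean()

# toy usage: a 2-way episode with a random linear embedding
rng = np.random.default_rng(0)
embed = lambda x: x @ rng.normal(size=(5, 3))
sx, sy = rng.normal(size=(10, 5)), np.repeat(np.arange(2), 5)
qx, qy = rng.normal(size=(6, 5)), np.repeat(np.arange(2), 3)
print(prototypical_episode_loss(sx, sy, qx, qy, embed))
```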
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper studies goalconditioned hierarchical rl gchrl and proposes a new algorithm called hierarchical exploration approach with stable subgoal representation learning hess to improve the stability of subgoal representation learning and strengthen the exploration at high level hess is built on previous method lesson the instability of subgoal representation learning is alleviated by a representation regularization which is utilized to encourage the representation to be stable for the states with relatively lower triplet losses originated from lesson further this paper proposes an active exploration method for the highlevel learning the method is built on the definitions of novelty and potential of states which corresponds to accumulated visit counts of highlevel state trajectory and a negative distance to the perturbed subgoals extensive experiments are conducted in a few mujoco environments with sparse reward demonstrating the superiority of proposed algorithm and the effectiveness of different ingredients strengths i appreciate that this paper studies the subgoal learning instability and highlevel exploration which are of importance to gchrl research the regularization method for representation stability is reasonable simple but empirically effective meanwhile the representation instability is a problem encountered in many other scenarios and the proposed regularization method is general and of potentials to be leveraged in other representation learning problem this paper proposes effective active exploration for highlevel exploration of gchrl to my knowledge the subgoal perturbation along with the definition of potential is new in gchrl i appreciate the combination of novelty and potential which properly takes novelty and reachability into consideration for an effective exploration selection the experiments are extensive well evaluating and demonstrating the characteristics of hess across multiple perspectives nbsp weaknesses i think the methods proposed in this paper are relatively simple and somewhat incremental however thanks to the solid experiments the effectiveness of these methods are demonstrated at a first glance the representation regularization seems to be disconnected to the active exploration method later i found that the stable representation learned is important to the effectiveness of novelty calculation i recommend the authors to make the connection more obvious for a better convey of the story although i think the methods are reasonable in an overall view i have a few concerns on concrete implementations i list my questions and concerns below nbsp my first concern is the calculation of novelty equation 4 i have no question on the maintenance of nphis but for the calculation of accumulated visit count of highlevel state trajectory i wonder given a state si how the trajectory of policy pihier is obtained exactly nbsp second for equation 5 since the potential is defined over the expectation of highlevel transition obtained by pihier with the perturbed subgoal ge how are such transitions obtained nbsp for both above two concerns one possible way is to simulate the rollouts with a world model but this seems not to be the way used in this paper alternatively are these approximated with the trajectories in the replay buffer if so how should consider the offpolicyness and suboptimality nbsp the third question is on the computation complexity the top k selection 
in representation regularization and the calculation in equation 6 the selection of candidates according to the constraints the calculation of novelty and potentials it seems the computation is heavy for these what are the practical implementations nbsp besides i have a few questions on the experiments how to understand that some baseline algorithms work better in imageversion environments eg hsrhicm on antmaze and lesson on ant push is the sentence so the intrinsic rewards of hicm may vanish before the policies have been learned well checked in the experiments in figure 6 are the same 5 trajectories used for the upper and lower panels at each time point and what are the trajectories exactly since at the beginning of learning the agent fails to reach the final goals according to the results in figure 4 nbsp minors can the authors explain more on the sentence to keep the lowlevel reward function rl stationary while learning phi we concatenate states and state embeddings together as the lowlevel states above equation 1 nbsp i will raise my score if my questions and concerns mentioned above can be well addressed postrebuttal comments some of my concerns and questions are well addressed i raised my evaluation to a borderline acceptation although my main concern on the relatively complex mechanisms involved in the implementation eg hash table iterative sampling and fitering table lookup and so on a few of these are samplewise remains and i think these computation implementation should be noted and described in detail in the revision later however i recognize the authors efforts in pushing the boundary of hrl i vote for a borderline acceptation after discussing with the authors i recognize the authors efforts in pushing the boundary of hrl although i still have some concerns on the complexity of the proposed methods and the practical computation cost docsep1 this paper investigates learning stable subgoals within a deep hierarchical reinforcement learning setup 2 two controllers are learned from the same experience replay buffer the high level controller serves as a meta controller and the low level controller serves as a goalachieving agent the high level controller communicates abstract goals to the low level controllers 3 the high level controller is optimized using an extrinsically specified reward function the low level controller optimizes the intrinsic goal communicated by the high level controller the subgoals are changed after a deterministic time length known option termination 4 the subgoals are designed with the key insight that desirable novel subgoals should be reachable and effectively guide the agent to unexplored areas typically countbased predicted or successor feature based rewards have been used as novelty measures however these fall short in terms of reasoning about reachability of states to handle this a potential measure for subgoals is proposed which regularizes the novelty measure 5 to go to unexplored states a directional goal is synthesizedimagined using the current state and a directional vector the potential function makes sure that this is approximately reachable by formulating reward as the expected negative distance between the ending state and imagined goal this is similar to feudal networks vezhnevets et al but goes beyond it to handle diversity and reachability 6 the approach is validated on a set of hard to explore continuous environments with reasonably strong and relevant baselines 1 i think this paper is interesting and explores a novel set of ideas the 
baselines also seem reasonable the closest baseline in terms of using directional goal vectors is feudal networks i would have expected to see a head to head comparison with this approach even though this proposed method goes beyond it however the core idea of having a meta controller output goal vectors and then subcontrollers learning to execute them was explored in feudal networks 2 what are the effects of changing the option termination condition currently it is hard coded to be c what are the implications of this do the authors observe any deviations or improvements if this hyper parameter varies it seems like the potential function novelty measure and option termination are deeply interlinked it would have been good to more clearly understand the relationship between these measures 3 figure 4 is the main quantitative figure it seems important to test the effects of stability regularization this is highlighted qualitatively in figure 6 but not shown in figure 4 4 the qualitative analysis on the effects of the interaction between potential and novelty measure is quite sparse it is not clear how it fails and where it works figure 6 is helpful but it needs improvements in terms of clarity and scope other environments 5 figure 5 is truncated at 5 million steps how does the asymptotic performance look like for this method does it plateau sooner than baselines what is the maximum achievable reward for these tasks this paper presents an interesting and novel idea at the intersection of deep hrl novelty based exploration and reachability the experiments are sound but could require further clarification and expansion of scope the clarity of the paper can also be improved to more directly address the need and important of stability regularization docsepthe authors propose a hierarchical rl algorithm which augments an existing contrastive learningbased subgoal representation objective with heuristics for exploration the proposed algorithm seeks to reduce representation drift over the course of learning by penalizing the learner for modifying phis for states s with low contrastive loss furthermore the authors propose exploration heuristics that encourages the learner to explore in promising areas of latent space by combining countbased novelty and potential measures the proposed algorithm is demonstrated to have the desirable properties and outperforms existing methods the analysis is complemented by an ablative analysis that disentangles the effects of each proposed mechanism pros 1 comparisons between the proposed method and other hierarchical methods demonstrate that the algorithm results in better performance 2 the authors performed thorough ablations demonstrating the impact of each proposed component of their algorithm cons 1 the authors do not explain how the counts and potential measures are estimated from data in particular 1 how are the cumulative counts ns in 4 estimated given pihier is changing over the course of training 2 how is ugt estimated from buffer data given that the expectation is calculated with ge being set as a subgoal for the policy and thus would not have been observed in the actual environment rollouts 2 why is prioritized sampling used in equation 3 the motivation on this point was not really explained in detail 3 for the ablative analysis it seems like it would be better to evaluate reactive exploration using cumulative counts instead of immediate counts to better isolate the impact of reactive exploration versus learning a policy to maximize the same intrinsic 
rewards clarification questions 1 why does choosing lambdas as a continuous function of the representation loss impose heavy computational demands it seems like the losses are already being computed in the process of obtaining the triplets with minimal representation losses 2 how is the latent space partitioned into cells if there are no knowns bounds on lvert phis rvert a priori 3 in motivating the potential measure the authors claimed that the novelty measure is a mixture of counts in the past and current representation spaces but it is unclear why this is the case if one can easily recompute n when phi changes 4 how is the lowlevel policy training done is hindsight experience replay used overall i vote for a weak accept the ideas in the paper are interesting and the experimental evaluation is thorough and demonstrates the benefits of the proposed algorithm however the work could benefit from a more detailed description of how the relevant measures are estimated as well minor changes to the experimental procedure docsepthis paper proposes a new algorithm for goalconditioned hierarchical reinforcement learning that is able to succeed in tasks with sparse rewards differently from most other methods in the field it does so through two innovations 1 a representation learning procedure that is more stable and 2 a exploration strategy that takes into consideration not only novelty but also reachability specifically the representation learning procedure is based on what is now a standard a contrastive loss but it is augmented by a regularization term that make the learning procedure stable where the representation is already satisfactory allowing goal sampling to be more effective figure 6 is a particularly nice visualization of the impact of this regularization term the exploration strategy to sample goals to be visited is also novel instead of using goal visitation counts this paper proposes the idea of using expected sum of state visitation counts from that state onwards capturing some notion of long term novelty moreover the exploration bonus also has a potential term that captures how promising each goal state is in terms of how far from the goal state the agent is expected to end up quantitative impact is reported in figure 7 but i particularly liked the intuitionsvisualizations provided in figure 8 this paper is really well executed it builds on top of an already complicated architecture adding more than one new component to that architecture but it does so while providing proper intuitions for each one of these new components and more importantly actually doing ablation studies that quantify the impact of each component to me section 54 is the highlight of the paper i also appreciated section 55 which shows how the paper is also concerned with stability over different parameters introduced by the proposed metric i think the paper would benefit from further clarifying some parts of the text but otherwise this is a good paper specifically in the introduction for example it is said that methods based on visit count are reactive exploration methods that need to learn how to maximize the intrinsic rewards before performing exploratory behaviors i dont necessarily disagree with that although the whole idea of visit counts is to incentivize these exploratory behaviors my question though is isnt this exactly the same with the proposed idea it does use counts and not only that but also expected state visitation counts for the trajectory which is even more demanding in terms of having to visit 
the state first in section 2 when defining uphist the distance is defined to be between gt and phist for the proposed algorithm should it be ge instead of gt still on section 2 it is said we concatenate states and state embeddings together as the lowlevel states what does this actually mean what are the states here for images for example would it literally be all pixels on the screen in section 31 i dont think lambda0 in section 32 it is said lowdimensional continuous latent space into discrete cells ie the state embeddings are mapped into cells containing them what are these cells how were they defined i can imagine this is somewhat straightforward to do if you assume you have access to xy positions but how is this done in higher dimensional settings how are these cells defined for images for example in section 32 when discussing the potential measure it is said that figure 3 demonstrates that with online representation learning the novelty measure is a mixture of counts in the past and current representation spaces so it might mislead the exploration how is that i couldnt understand what i should be looking at in figure 3 to reach this conclusion in section 33 it is said that the active exploration strategy avoids the nonstationary issue how arent these reward signals changing constantly based on counts and the representation being learned how does the active exploration strategy actually avoids the nonstationarity issue in section 4 it is said that bottomup hrl works learn a set of diverse skills or options in a selfsupervised manner and use those semantically meaningful lowlevel skills to explore in downstream tasks but those methods may produce some redundant and useless skills this claim is not backed up by any reference or experiment why is this true when some of these methods explicitly ask for diverse skills that are not supposed to overlap to each other in figure 4 how were the confidence intervals computed if only 5 samples were available in section 52 it is said the successor representation estimates the expected future state occupancy starting from a given state kulkarni et al 2016b but not the visitation number of the given state which is less helpful to promote exploration however isnt this exactly what hsr shows that the ell1 norm of the sr captures the visitation number of a given state moreover the reference to the sr should be peter dayan improving generalization for temporal difference learning the successor representation neural comput 54 613624 1993 no details were given on how figure 6 was generated i dont know how to reproduce it importantly in the ablations were the parameters of the ablated methods tuned this paper is really well executed it builds on top of an already complicated architecture adding more than one new component to that architecture but it does so while providing proper intuitions for each one of these new components and more importantly actually doing ablation studies that quantify the impact of each component to me section 54 is the highlight of the paper i also appreciated section 55 which shows how the paper is also concerned with stability over different parameters introduced by the proposed metric i think the paper would benefit from further clarifying some parts of the text but otherwise this is a good paper ### Summary:
the paper proposes a new goalconditioned hierarchical rl method aimed at improving performance on sparse reward tasks compared to prior work the novelty lies in a new way of improving the stability of goal representation learning and in an improved exploration strategy for proposing goals while taking reachability into account the paper does a good job of motivating the main ideas around stability and combining novelty with reachability reviewers found the quantitative evaluation and the choice of baselines to be good with the exception of not including feudal networks which the authors explained was due to poor performance on the hard exploration tasks something that has been observed in prior work reviewers also found the thoroughness of the ablations and insightful visualizations to be highlights overall reviewers were unanimous in recommending acceptance which i support
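a minimal sketch of the kind of candidate-subgoal scoring the reviews above describe, combining a count-based novelty term with a distance-based potential (reachability) term; the cell-count table, the stand-in for expected reachability and the weighting are illustrative assumptions rather than the paper's actual implementation.

```python
# Illustrative sketch only: score candidate subgoals by novelty + potential,
# in the spirit of the exploration strategy discussed in the reviews above.
import numpy as np

def score_candidates(cand_latents, visit_counts, cell_size, reached_latents, w=1.0):
    """cand_latents: (M, k) candidate subgoals in latent space,
    visit_counts: dict mapping discretized latent cells to visit counts,
    reached_latents: (M, k) latent states the low-level policy previously ended
    in when pursuing each candidate (a simple stand-in for expected reachability)."""
    novelty, potential = np.zeros(len(cand_latents)), np.zeros(len(cand_latents))
    for i, (g, reached) in enumerate(zip(cand_latents, reached_latents)):
        cell = tuple(np.floor(g / cell_size).astype(int))       # discretize latent cell
        novelty[i] = 1.0 / np.sqrt(1 + visit_counts.get(cell, 0))
        potential[i] = -np.linalg.norm(reached - g)              # closer ending state = more reachable
    return novelty + w * potential                               # select the argmax candidate

counts = {(0, 0): 50, (3, 2): 1}
cands = np.array([[0.2, 0.1], [3.4, 2.2]])
ends = np.array([[0.3, 0.2], [1.0, 0.5]])
print(score_candidates(cands, counts, cell_size=1.0, reached_latents=ends))
```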
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this paper presents a framework for performing both differentiable physics simulations and differentiable rendering this fully differentiable simulation and rendering pipeline is then employed to perform system identification tasks directly from video frames being able to match or outperform both visualbased and statebased baselines moreover the potential of this framework to be applied for visuomotor control is also demonstrated pros this method unified advances in the differentiation of both physics simulation and rendering the experimental results demonstrate a good ability to perform system identification for diverse parameters and control directly from videos the ability to identify parameters or direct control tasks directly from images is useful since it reduces the need for direct supervisionannotation in the form of state information the presented simulator supports a variety of domains such as rigid and deformable body dynamics cloth simulation and these are efficient enough to be run faster than real time at least for simple tasks cons overall the proposed method is mostly a unification of preexisting techniques from different fields such as differentiable rigid and deformable body dynamics differentiable rendering the paper itself admits that a limitation of this method is that it currently has limited capability to handle contactrich motion that introduces a large number of discontinuities which limits its applicability to realworld scenes it cannot also currently handle joints all of these would be important for possible robotic applications for example the tasks demonstrated in the experiments are simple and issues from model mismatch does not seem to have been thoroughly evaluated see comments below for more reasons for score edit score updated see discussion below overall given the pros described above notably the interesting results achieved for system identification and control directly from video frames by combining differentiable physics and rendering into a single framework i recommend this paper for acceptance given some of the concerns raised in the cons and in more detail in the comments below i for now will score this paper as a little above the acceptance threshold additional comments the scenarios used for the system identification and control tasks are fairly simple with usually only a single object and few contact points was the ground truth for the scenarios in the system identification tasks generated using gradsim itself if so isnt it unfair that it is compared to other models eg pybullet for which there would be model mismatch while not mismatch would be present for gradsim along the same direction the experiments present a section on impact of imperfect dynamics and rendering models it would also be interesting to see a quantification of the impact of model mismatch possibly both while using the same renderer ie only dynamics mismatch or also different renderers in the experiments section it is said that inference is done by picking an initial guess of the mass at random from what distribution is this random initial guess picked from what are these starting guesses in relation to the true parameters the section on impact of shading and texture cues seems a little too short which renders it hard to understand in detail what is going on docsepthis work presents a fully differentiable physics simulation coupled with neural rendering such 
that input video can be used to estimate object properties or find control policies to move those objects by trying to generate the same video at the output the paper is well motivated by presenting a natural progression of ideas from this literature and it does a thorough job discussing related work the paper is light on details in section 3 and it is necessary to refer to the appendix to get a complete picture overall the technical contribution is solid and thus worth accepting the paper even if the validation is with relatively simpler experiments since they are sufficient to motivate this direction to be further researched below are a few comments to aid in improving the current work all experiments use what i am guessing are input desired videos from the same pipeline and then later hiding some parameters to be learned while this is a good validation the learning done here is still in distribution it would be useful to see if video even simplistic from a different simulator or simplified from a real world video could be applied to what extent is this possible and are there any fundamental limitations that prevent this at the moment analysis is mostly with one object in an empty scene are there technical limitations to handling realistic scenes where there are multiple objects and those objects interact with each other as well the environment how does this affect performance wrt forward and backward pass timings with such experiments it would be helpful to understand if the released code can be easily extended to such more complex settings or if someone would need to start a new implementation from scratch the scale on the loss landscape is quite small 04 pixelwise mse how good does the initial guess need to be to stay in the range do the curves in fig 3 continue the trend beyond these values for larger error reality gap while this is discuss in reference to visual appearances since the current experiments deal with synthetic scenes the more relevant topic to discuss is the reality gap wrt physics and object motions experiments designed to study this would boost confidence in this approach other comments how much does the performance depend on good initial guess currently a single impulse is used to set things in motion can this be extended to handle more continuous actions presenting qualitative results for baselines would be helpful some baselines not clearly explained average random convlstm how does performance scale with the length of the videodocsepthis work focuses on the problem of estimating object physical properties from video sequences the proposed framework combines differentiable physical simulations and differentiable rendering to map physical parameters into images differentiably this paradigm is then used to recover physical parameters from image sequences by means of gradient based optimisation validation of the proposed method is carried through two main synthetic applications parameter identification and visuomotor control although the proposed approach still requires 3d ground truth information to yield reliable estimates it is and encouraging step towards unsupervised physics understanding from imagevideo data positive crucially and differently from previous attempts the proposed approach does not require 3d supervision except for geometry and appearance of the static scene ie at t0 approach is clever simple and yields interpretable representation first step towards physics understanding from videos negative i would improve the quality of the visualisations and 
plots in the paper eg i found figure 6 impossible to read how to differentiate through the physical simulator was not obvious to me i would have appreciated a more detailed explanation of how that is done in practice for one of the physical problems studied in the paper to be included in the main manuscript in an effort to make the paper more readable ### Summary:
this paper presents a framework for joint differentiable simulation of physics and image formation for inverse problems it brings together ideas from differentiable physics and differentiable rendering in a compelling framework
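a toy sketch of the identify-parameters-from-video loop the reviews discuss: a differentiable simulator composed with a differentiable renderer, with a pixel-wise mse driving gradient descent on a physical parameter; the 1d free-fall scene, the gaussian-blob renderer and the choice of gravity as the unknown are illustrative assumptions, not the actual system.

```python
# Illustrative sketch only: recover a physical parameter from rendered frames by
# backpropagating a pixel loss through a toy simulator and a toy renderer.
import torch

grid = torch.linspace(0.0, 10.0, 64)
times = torch.arange(10, dtype=torch.float32) * 0.1

def simulate(g, y0=8.0):                        # free fall, differentiable w.r.t. g
    return y0 - 0.5 * g * times ** 2

def render(heights, sigma=0.3):                 # each frame: gaussian blob at the ball height
    return torch.exp(-((grid[None, :] - heights[:, None]) ** 2) / (2 * sigma ** 2))

with torch.no_grad():
    target_video = render(simulate(torch.tensor(9.8)))   # "observed" frames

g_hat = torch.tensor(5.0, requires_grad=True)            # initial guess of the parameter
opt = torch.optim.Adam([g_hat], lr=0.1)
for step in range(300):
    opt.zero_grad()
    loss = ((render(simulate(g_hat)) - target_video) ** 2).mean()
    loss.backward()
    opt.step()
print(float(g_hat))   # g_hat moves toward the true value of 9.8
```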
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors present a new concentration of measure inequality for sum of independent bounded random variables namely splitkl inequality they derive this new inequality by combining klinequalities 1 and 2 in a clever way they provide empirical cmparison of this new inequalities with the existing concentration inequalities such as klinequality empirical bernstein inequality and unexpected bernstein inequality they show that their new inequality is tighter than all of these inequalities in some regimes they further extend their contribution to pac bayes setting and derive pacbayessplitkl inequality again they empirically in synthetic and real world data identify regimes where their inequality performs better than other existing inequalities such as pacbayeskl pacbayes empirical bernstein pacbayes unexpected bernstein and pacbayes empirical bennett inequalities strengths the paper is easy to follow and claims stem from logical arguments the experiments are extensive and support the claims made by authors theoretically the idea is simple but interestingly it leads to good empirical results weaknesses it is difficult to understand that how is this new inequality fundamentally different than the kl inequality without a careful choice of mu i am not sure if this new inequality would always be tighter than kl inequality in all the regimes my observation comes from the following argument consider z in a b take mu a then z za and z 0 similalry take mu b then z 0 and z b z in both these cases we are just translating z and both kl inequality and klsplit inequality should behave similar for these choices of mu of course there might be a clever choice of mu which makes one perform better than the other but i am not sure how to make that choice the limitations are discussed adequately docsepthe authors introduced a new approach to a concetration inquality for random variables over a bounded interval called split kl inequality which first decomposes the original random variable into three terms and then applies an existing bound kl ineqaulity to the decomposed terms then the authors proposed to use the split kl inequality for pacbayes bounds of generalisation error of learning alrogrithms as well as to combine it with existing approaches of excess loss and informed prior the derived pac bayes generalisation error bound were compared and examined in a few different experiments the reviewer is personally very much fond of the authors writing in this paper which explains important matters of this work other existing works in an intuitive and comprehensive manner for example the motivation of this work is nicely lined up with a proper technical level to wide audiences in introduction in addition the advantage of split kl inequality has been made clear in figure 1 comprehensive presentation and simplicity of the idea is a clear strengh of this work my main concern is the significance impact when we combine this idea with pacbayes bounds the derived new generalisation bound in figure 2 3 seemed similar to the other existing bounds at first glance or it was unclear how to interprete the improvement level for the first experiment for example since the authors combined their idea of split kl inequality with existing approaches of informed priors some might get an impression from these figures that the informed prior part has already finished the majority of works to lower a bound in each bound and they 
may wonder about how critical the improvement by the split kl part is there would be no concern for potential negative societal impact to me personally the current limitation is that it is difficult to interpret from experiments or equations whether the proposed idea of pacbayessplitkl inequalities has improved the generalisation bounds to a fair degree or not for example would the difference in the numbers in the figures be significant in the context of pacbayes the reviewers position on this paper is neutral and the reviewer is happy to increase the score if the technical or practical impact is well justified docsepthe paper introduces a new concentration inequality for the sum of iid bounded random variables the paper uses a technique of splitting the samples with a threshold and then using a klinequality on each part this splitting allows using both the lower and upper bound klinequalities the resulting bound enjoys both the tightness of the klinequality and the ability to exploit the lower variance of rv that takes values within a segment the empirical comparison clearly shows the tightness of the new splitkl bound in different regimes compared to the empirical bernstein and the standard kl inequalities the paper then derives pacbayessplitkl inequality and applies it to the excess loss of a binary classification problem the new bound exploits the lowered variance of the excess losses compared to the binary losses and therefore the overall splitklpb bound can be competitive with the standard klpb bound as demonstrated on synthetic and realworld data strengths 1 i believe the work is original and wellmotivated 2 the use of the splitting technique is clever and novel as far as i know 3 the paper is wellwritten and clear 4 the authors provide an adequate survey of related work 5 the empirical evaluation of the splitkl inequality clearly shows its merits weaknesses 1 the empirical evaluation of the splitklpacbayes bound does not seem to give definitive conclusions besides the looseness of pacbayesempiricalbennett on certain datasets i suggest adding more controlled synthetic experiments as were done in fig 1 for the concentration bounds since it can give good intuition to when certain bounds are preferable no additional limitations docsepthe authors address the question of providing pacbayes bounds for losses when the empirical variance is low as previously addressed by eg 1 2 a special case of this is finding bounds for ternary losses in {-1, 0, 1} which arises in two important ways 1 bounds on the excess misclassification loss which can also be used as per 1 to tighten pacbayes bounds on the nonexcess loss 2 in conjunction with the cantellichebyshev relaxation given by 3 to provide bounds on the nonrandomized weighted majority vote via pacbayes for losses in [0, 1] the smallkl pacbayes bound eg 4 is usually the tightest even when the variance is low but not for losses in [-1, 1] after rescaling the bound in order to leverage this the authors translate each random variable in the sum before decomposing it into positive and negative parts zi - mu = zi+ - zi- with zi+ = max(0, zi - mu) and zi- = max(0, mu - zi) before applying the smallkl bound to the sums of the zi+ and the zi- separately which are both {0, 1} valued in the ternary untranslated case this is called the splitkl pacbayes bound this is used to prove new concentration and pacbayes bounds these are further combined with the excess risk and informed prior ideas from 1 or the cantellichebyshev relaxation from 3 and evaluated in experimental setups taken from the above 1 zakaria mhammedi
peter grnwald and benjamin guedj pacbayes unexpected bernstein inequality 2 ilya tolstikhin and yevgeny seldin pacbayesempiricalbernstein inequality 3 yishan wu andres masegosa stephan lorenzen christian igel and yevgeny seldin chebyshevcantelli pacbayesbennett inequality for the weighted majority vote 4 john langford tutorial on practical prediction theory for classification update overall i am not satisfied with the quite limited evaluation of this bound which does not show clear improvements from previous results this weakens the motivation for the paper too because of the limited number of new technical ideas therefore i find myself much more on the borderline than my original review and i do agree with some of the criticisms of reviewer nl9t however given that related work has previously appeared at neurips with similarly negligible empirical improvements i will keep my weak accept score strengths clarity and motivation the paper is very well written and was a pleasure to read the relationships to previous works 1 2 was very well explained and the incorporation of ideas from 1 was well motivated the alternative form of the main result from 1 is an improvement in clarity to how it is stated therein and the situation of this work within its wider context was reasonably clear my only minor criticism is that the experiments in section 42 do not sufficiently explain the use of the chebyshevcantelli bound and majority votes as used there this is a shame as i think the use of the splitkl bound for majority votes is a good use case relevance i think that the paper makes a contribution to an important and highlyactive area of machine learning improving pacbayes bounds which are among the most useful in contemporary learning theory they bring some ideas from 1 to a wider application which is a valuable contribution weaknesses technical contribution and originality here i think the paper falls down a bit the main technical result is simply a decomposition of a random variable into positive and negative parts combined with an application of the smallkl pacbayes inequality this is combined with the excess loss idea from 1 and the experimental setup therein or the cantellichebyshev bound from 3 and their experimental setup all of which is straightforward such simple ideas are can be very valuable when they lead to breakthroughs but that does not seem to be the case here and most of the ideas used in the paper and discussed at length were originated by 1 experimental results in the more important pacbayes setting the new results are quite weak with the new bound giving very similar results to that of 1 the bound is not shown to be any improvement as optimization objective either the simpler concentration inequality setting is not particularly interesting except as a motivation and for the ternary rvs used an even better bound would be obtained by applying the test set bound th 8 to the decomposition z z z ie a splitbinomial bound na the results are primarily of a theoretical nature ### Summary:
this meta review is based on the reviews the authors rebuttal and the discussion with the reviewers and ultimately my own judgement on the paper there was a consensus that the paper contributes an interesting new concentration of measure inequality and derive a useful pacbayes inequality i feel this work deserves to be featured at neurips and will attract interest from the community i would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by reviewers congratulations
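for concreteness, the decomposition that the reviews above keep referring to can be written as follows; the notation is an illustrative restatement and may differ from the paper's own.

```latex
% assumes amsmath/amssymb; notation illustrative
\[
  Z = \mu + Z^{+} - Z^{-},
  \qquad Z^{+} = \max(0,\, Z - \mu),
  \qquad Z^{-} = \max(0,\, \mu - Z).
\]
% for Z in [a,b], Z^{+} lies in [0, b - mu] and Z^{-} in [0, mu - a], so the
% kl (binomial tail) inequality can be applied to each empirical mean separately:
\[
  \mathbb{E}[Z] \;\le\; \mu
  + \text{upper kl bound on } \mathbb{E}[Z^{+}]
  - \text{lower kl bound on } \mathbb{E}[Z^{-}].
\]
```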
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work claims to propose a general methodology for approximating offline algorithms in online settings in contrast to previous methods only for particular cases to achieve this the author prosed a multitasksbased method to learn from the datasets created by the offline algorithms experiments are conducted to verify the idea strengths 1 the motivation of bridging the gap between offline algorithms and their online counterparts is clear and practical realworld examples are discussed in the introduction and conclusion and help to further understand the motivation 2 the proposed approach is novel to my knowledge i admire the idea to capture the behavior structure by multitask learning model which is interesting to create datasets using offline algorithm for training the online counterpart 3 the design is clearly presented figure 1 2 are helpful to understand the highlevel framework weakness 1 why no baselines are presented in the experiment part i am not an expert in this field so i am not entirely convinced that it needs any comparison of other benchmarks 2 is there any theoretical guarantees or insights behind the design 3 i personally think that the paper writing can be further enhanced for example 1 the sections and subsections does not follow a traditional manner eg the experiment and experimental results are not in one section the ethics is a subsection of conclusion 2 although the authors claim that the proposed method outperform the sota however the performance of the sota model is not present in the table minor we review this limitation more thoroughly in section in page 6 section 
 i admire the motivation idea and possible impact of this paper however i am not entirely convinced that the experimental results are convincing enough i would like to update the score after interacting with the authors and other reviewers docsepthis paper makes use of offline algorithms ie algorithms that can view entire timeseries to produce outputs which are used to train an online algorithm ie an algorithm that can only view past values of a timeseries the online algorithms are not trained to produce the outputs of the offline algorithm directly instead windows of the outputs are mapped to class labels using a handcrafted mapping specific to the domain the online algorithms are then trained to predict the class labels of the current window and progressively forwardlooking windows ie a multitask prediction problem given a window of the timeseries they apply this method to synthetic and realworld timeseries data historical stock market data and report the classification accurately of each of the multitask prediction problems they also mention this can be used to predict the direction the price of a stock will move and state that their method is competitive with stateoftheart ml methods on this task strengths the paper describes their method experimental setup and results very clearly the paper presents an interesting research direction using knowledge from offline algorithms to improve performance of online algorithms via learning the paper highlights that leveraging these methods could be impactful for many domains weaknesses the primary weakness is the main claim seem incorrect the authors claim to develop a general framework for approximating offline algorithms using online algorithms but the online algorithms trained in this paper do not directly attempt to approximate the offline algorithms the online algorithms do not even produce the same type of outputs as the offline algorithms the offline algorithms take timeseries xx1 xt as input and produces outputs in the form of decision points ax xi axi xj axj an online algorithm that approximates this could take as input a partial timeseries xt xtd1 xt and decide whether or not to produce a decision point at time t instead the online algorithms in this work predict class labels which are a lossy mapping of sequences of decision points as a result it is not clear to me in what sense these online algorithms approximate the offline algorithms can you clarify this for me in what sense is it approximating the offline algorithm if classification accuracy is 100 can we make any statements about how good the approximation is because the primary claim is not clear its not clear how to evaluate the proposed method or what baselines to compare to the related work section is short and only mentions offline to online conversion and explanatory vs predictive models since the paper also mentions timeseries forecasting it would be good mention related work in that field too suggestion may i suggest the following claim the paper develops a method which leverages offline algorithms to perform better online timeseries prediction then the main evaluation metric should be timeseries prediction and baselines would include a range of methods for timeseries prediction and ablations which use offline outputs in different ways but have similar architecture it would be good to report the performance of comparable ml timeseries prediction algorithms trained on the same data and with similar architectures currently the authors mention another paper but do not report 
numbers for it related in introduction paragraph 2 you compare your method to timeseries forecasting techniques and mention 3 benefits of your technique which focuses on behavior vs techniques that directly predict timeseries trajectories it would be good to see this demonstrated experimentally while the ideas presented in this work could be very impactful as the paper is currently written its main claim seems incorrect which is grounds for rejection the paper claims to develop a general framework for approximating offline algorithms using online algorithms but to me it seems the online algorithms do not approximate the offline algorithms i think the paper could be made substantially better in one of two ways 1 the authors clarify in what sense the online algorithms approximate the offline algorithms 2 the authors modify the claims to more accurately reflect the the method and add additional experiments to support those claims docsepthe paper presents a novel method for approximating offline timeseries algorithms in an online setting the method achieves this by assigning each window of the timeseries data to a set of discrete classes based on the behavioral structure in that window where the behavioral structures are encodings of the relative placements of the decision points in that window as determined by an offline timeseries algorithm these classes then provide the targets for a series of connected classification problems an approximate online algorithm is obtained by training a multitask classification neural network to solve these results on onedimensional synthetic and stockmarket data show that the predictive behavior of this method matches our intuitions where it is most accurate when explaining the data and least accurate when predicting into the future novelty and significance i am not an expert in this domain but to my knowledge the proposed approach is novel and presents an interesting method for using offline algorithms to create datasets for training machine learning models to approximate the outputs of the offline algorithms i think the idea could be of interest to the community that said the paper does not provide any way to evaluate the significance of the proposed result as there are no empirical or theoretical comparisons to any other methods thus it is impossible to situate the proposed method either relatively or absolutely to determine whether the method will be of any benefit to the community the paper presents two datasets a synthetic toy dataset and a constructed dataset of historical stock market data neither of which seem to have been used in the literature before and trains the proposed method on these datasets but compares to no other methods the results show that the method has higher accuracy for the easier classification tasks and lower accuracy for the lower harder prediction tasks and that the method seems to get above chance accuracy on most problems but this does not tell the reader anything about the overall performance and behavior of the algorithm in future revisions of the paper the authors should compare to other algorithms in this same space a reasonable place to start is with the works discussed in the related work section you can show generality by taking the proposed algorithm and comparing it with multiple different existing approaches on the different tasks that each of those existing approaches works on if the scores of the proposed approach are reasonable then we will have some evidence that it works as claimed i urge the authors to also 
perform ablations on the method what effects do changes in model architecture have or how does the choice of offline algorithm affect the method how do variations of the synthetic dataset affect the proposed approach as opposed to other methods ie is it more robust or more accurate in particular regimes such as different values of n s gamma and d note that two of the three proposed values of gamma 0 05 1 are trivial and thus do not provide much information i encourage the authors to also include gamma025 and 075 to better show trends and to plot these values instead of just putting the numbers in a table further showing topk for k521 seems unnecessary 5 and 1 would be sufficient regarding the claim of meeting or exceeding performance on mlbased stock prediction systems there is no evidence given in this paper for this claim so it is unsubstantiated as i understand it the cited paper rezaei et al 2021 uses an entirely different dataset so comparisons of accuracy are meaningless clarity overall the method is fairly clearly explained and the remainder of the paper is clear i think the paper would benefit from providing a summary of the method at the beginning of section 2 and from some changes to notation to simplify the presentation and to fix some issues with the notation the precise method to generate the synthetic dataset and create the stock market data should be detailed in the paper as well without requiring readers to go to the not yet provided code detailed questions and comments preprocessing both the train and test splits together is wrong as it allows information to bleed from test to train both in the form of the normalization and the set of structures trained on all preprocessing should be performed only on the train data the statistics retained and then these used on the test data the fact that the number of unique structures s changes for different values of gamma makes it difficult to compare trends across values of gamma instead i would suggest the authors change the dataset generation process to first specify an alphabet of structures s and then generate noised trajectories from this alphabet it appears that lambda is used both as an index and a count but the count value of lambda always equals n so why not just use n defining s k is confusing as k is already and typically used as an index variable and it is nonstandard and unclear to use it as a count please define a domain for the class labels and use that directly to simplify notation the definition of a window seems to assume that decision points are uniformly spaced but this is not made explicit anywhere the definition of the estimator f in eq 2 does not match the text as it should be mapping onto the simplex of the class label domain based on the corresponding text please explain the method for computing the decision points l1tf in more detail overall this paper lacks an evaluation for the proposed method and thus cannot be accepted the proposed approach seems interesting and i encourage the authors to resubmit after incorporating a proper evaluation by comparing to other methods on established datasets and addressing some of the other comments above in particular the dataset issues docsepthis paper considers the problem of an offline algorithm that operates on a timeseries x to obtain sequence of decisions in an online setting that is it tries to approximate the behavior of this offline algorithm in a setting where at time t the algorithm only has access to the input until t whereas in the offline algorithm the 
algorithm can lookahead and optimize the pose this as a multitask learning problem where they slice the input into windows of size d and the goal is to map each d dimensional window to one of the k possible structures in the dataset they propose a mtl algorithm and use simulations and realworld stock market data to study the effects of their approach strengths a novel formulation and research topic the idea of trying to predict the behavior of an offline algorithm in a online setting using multitask learning is a new approach the exact formulation and the the way to pose this as mtl is nontrivial the bulk of the contributions of this work is to make this modelling approach once figured out the proposed algorithm itself is standard multitask learning this paper contributes to the now growing line of work on bridging classical algorithms with machine learning in that line of work this considered approach is novel it gives a new perspective the typical direction has been to use the ml model as hints to improve the onlineoffline algorithm on the other hand here the online to offline algorithm is bridged via a machine learning task for the most part the paper is clear and wellwritten weakness the first main weakness i find in this paper is that it does not sufficiently motivate the problem well in particular the online problem and it being posed as a mtl seems very abstract to the reader it is not clear how to use the outcome of this modelling in an actionable form in particular how does one interpret the class prediction for a window what happens if the number of classes are unknownevolving may be elaborating this on a toystandard offline algorithm before making it abstract would help the reader a great bit related to above the formulation makes it seem like this applies to any offline algorithm but it really only applies to offline algorithms that work on timesegmented data so it comes of as overselling the main contributions of the paper please correct me if i am wrong if not i would reword the introduction to make this aspect very clear i like some of the ideas of this paper but overall i think that it falls just below the bar because of the reasons i stated in the weakness please correct me if my understadning is incorrect docsepthis work studies a methodological framework to transformapproximate offline algorithms into their online counterparts the main methodology is to predict an offline algorithms actions in the real time future via learning behavioral structures of the offline algorithm using past data the work presents several experiments using both synthetic and real stock market data in general i like the idea of approximating the behavior of offline algorithms through the lens of multiple progressivelyforward looking tasks which essentially predicts the trajectory of future actions of the offline algorithm since this allows us to predict further into the future as opposed to predicting onestep ahead in standard ml methodologies to the best of my knowledge the idea of encoding the behavior of offline algorithms in graph structures and then predicting the occurrence of such structures for multiple actions ahead via a multitask learning framework is novel the following are some questionsconcerns 1 from my understanding the ultimate goal of the whole paper is to approximate the behavior of offline algorithms in real time as opposed to directly predicting the ground truth evolution of the time series this seems to me that the proposed frameworks performance is primarily driven by how 
well the offline algorithm can fit the historical data that being said if the offline algorithm significantly overfits the offline data eg some complex deep neural network does this mean the offlinetoonline framework can also perform arbitrarily well under certain conditions if so i find this hard to believe i might be misunderstanding something here and it would be great if the authors can provide some more explanations and insights in the paper eg what are some key drivers for the proposed frameworks performance and how does the proposed frameworks performance relate to that of the offline algorithm 2 from a practical perspective it seems to me that the algorithm is very data hungry as the number of structures may grow exponentially in the number of decision points in each structure hence i believe there is this inherent tradeoff between the amount of data required for labeling structures and how far we can predict into the future the paper seems to be lacking detailed discussions for this tradeoff or on a related note for how one should choose the optimal number of decisions within a structure 3 i am confused about the occurrence moments of predicted future actions since the proposed algorithm is predicting x actions ahead instead of x moment ahead consider the stock market example where we have task 1 that predicts 1 action ahead of some offline algo and task 2 that predicts 2 actions ahead how do we know that the last predicted action in task 2 is further away in the future than the single prediction action in task 1 in other words from my understanding the predicted structures are completely agnostic to actual occurrence moments and hence we cannot compare prediction actions across tasks i might have missed related discussions in the paper and it would be great if the authors can add some more emphasis 4 i find the discussions in section 2 general schema quite difficult to digest at first read and not until i went through the entire paper did i better understand how the multitask learning framework works perhaps instead of discussing pure concepts eg structure actions etc introducing the methodological framework within the context of a simple concrete example eg a simplified version of the stock market example with some dummy offline algorithm would improve the overall clarity of this section to the best of my knowledge the proposed offlinetoonline framework by predicting behavioral structures of the offline algo through a multitask learning scheme is novel for weaknesses more explanationsdiscussions on the following aspects would improve the paper 1 how the performance of the proposed framework relates to that of the offline algorithm 2 choice for number of decisions in a structure 3 comparing predictions across different tasks the papers exposition in terms of explaining the key concepts can also be improved ### Summary:
a brief summary this paper trains online algorithms that can only view the past timeseries to approximate offline algorithms that can see the entire timeseries the way this is done is basically the offline algorithm is used to provide discrete class targets to train the online algorithm the paper presents results on synthetic and historical stock market data reviewer s1h9 strengths practical problem novel approach clear presentation weaknesses no other baselines no theoretical guarantees behind the approach writing could be improved reviewer egw9 strengths clear writing interesting research direction weaknesses the primary claim seems incorrect and unclear due to the unclarity about the primary claim of this paper it is difficult to evaluate the paper lack of baselines the lack of discussions of the related works reviewer gii5 strengths interesting and novel approach weaknesses difficult to evaluate with no empirical baselines or theoretical evidence the datasets used in the paper are not used in the literature before authors should provide experimental results on datasets from the literature as well the paper needs to compare against the other baselines discussed in the related works more ablations and analysis on the proposed algorithm is required unsubstantiated claims regarding being sota on the task since the paper doesnt compare against any other baselines on these datasets the paper can be restructured to improve the flow and clarity reviewer zokr strengths novel and interesting research topic bridging classical algorithms and ml clearly written weaknesses lack of motivation for the problem the approach only works with offline algorithms that work on timesegmented data reviewer aafn strengths novel algorithm weaknesses potentially overfitting to the offline data data hungry approach confusion related to the occurrence moments of predicted future actions section 2 is difficult to understand key takeaways and thoughts overall i think the problem setup is very interesting however as pointed out by reviewers gii5 and egw9 due to the lack of baselines it is tough to compare the proposed algorithm against other approaches and this papers evaluation is challenging i would recommend the authors include more ablations and baselines in the future version of the paper and address the other issues pointed out above by the reviewers
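A minimal sketch of the windowing step that several of the reviews above found hard to follow may be useful: the offline algorithm's per-step decisions become targets for tasks that look progressively further ahead. All names, the fixed window size, and the random toy data are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def make_mtl_windows(series, offline_labels, window_size, horizon):
    """Slice a 1-d series into fixed-size windows and pair each window with the
    offline algorithm's next `horizon` decisions (one label per task).
    Illustrative reconstruction only, not the authors' exact setup."""
    xs, ys = [], []
    for t in range(window_size, len(series) - horizon + 1):
        xs.append(series[t - window_size:t])       # observable past at time t
        ys.append(offline_labels[t:t + horizon])   # targets for tasks 1..horizon
    return np.stack(xs), np.stack(ys)

# toy usage: 200 time steps, 5 possible offline "structures"
series = np.random.randn(200)
labels = np.random.randint(0, 5, size=200)
X, Y = make_mtl_windows(series, labels, window_size=20, horizon=3)
print(X.shape, Y.shape)   # (178, 20) (178, 3)
```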
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper investigates internal working of rnn by mapping its hidden states to the nodes of minimal dfas that generated the training inputs and its abstractions authors found that in fact such a mapping exists and a linear decoder suffices for the purpose inspecting some of the minimal dfas that correspond to regular expressions induced state abstractions are intuitive and interpretable from a viewpoint of training rnns by training sequences this paper is interesting and the central idea of using formal languages to generate feeding inputs is good in fact i am also doing a different research that also leverages a formal grammar with rnn most of the paper is clear so i have only a few minor comments in figures 4 and 5 the most complex mdfa of 14 nodes does not have the lowest testing accuracies in other words testing accuracies is not generally proportional to the complexity of mdfa why does this happen as noted in the footnote in page 5 state abstraction is driven by the idea of hierarchical grammars then as briefly noted in the conclusion why not using a simple cfg or pcfg to generate training sequences in this case state abstractions are clear by definition and it is curious to see if rnn actually learns abstract states such as np and vp in natural language through mapping from hidden states to abstracted states because this paper is exploratory i would like to see more examples beyond only the two in figure 6 is it possible to generate a regular expression itself randomly to feed into rnn docsepthis paper aims to show that an rnn trained to recognize regular languages effectively focuses on a more abstract representation of the fsa of the corresponding language understanding the type of information encoded in the hidden states of rnns is an important research question recent results have shown connections between existing rnn architectures and both weighted eg chen et al naacl 2018 peng et al emnlp 2018 and unweighted weiss et al acl 2018 fsas this paper asks a simple question when trained to recognize regular languages do rnns converge on the same states as the corresponding fsa while exploring solutions to this question is potentially interesting there are significant clarity issues in this paper which make it hard to understand it also the main claim of the paper that the rnn is focusing on a low level abstraction of thew fsa is not backedup by the results comments the authors claim that the rnn states map to fsa states with low coarseness but figure 3b which is never referred to in text shows that in most cases the ratio of coarseness is at least 13 and in some cases 12 clarity while the introduction is relatively clear starting from the middle of section 3 there are multiple clarity issues in this paper in the current state of affairs it is hard for me to evaluate the full contribution of the paper the definitions in section 3 were somewhat confusing what is the conceptual difference between the two accuracy definitions when combining two states does the new fsa accept most of the strings in the original fsas some of them can you quantify that also figure 6 which kind of addresses this question would be much more helpful if it used simple expressions and demonstrated how the new fsa looks like after the merge section 4 leaves many important questions unanswered 1 which rnn was used which model which parameters which training regime etc 2 how were the expressions 
sampled the authors mention that they were randomly sampled so how come they talk about date and email expressions 3 what is the basic accuracy of the rnn classifier before decoding is it able to learn to recognize the language to what accuracy many of the tables and figures are never referred to in text figure 3b figure 5 in figure 6 there is a mismatch between the regular expression eg 093 and the transitions on the fsa ad how come figure 3a goes up to 11 isnt it bounded by 1 100 the negative sampling procedure should be described in the main text not the appendix also it is not clear how come shuffling the characters is considered an independent distribution docseppaper summary the authors trained rnns to recognize formal languages defined by random regular expressions then measured the accuracy of decoders that predict states of the minimal deterministic finite automata mdfa from the rnn hidden states they then perform a greedy search over partitions of the set of mdfa states to find the groups of states which when merged into a single decoder target maximize prediction accuracy for both the mdfa and the merged classes prediction problems linear decoders perform as well as nonlinear decoders clarity the paper is very clear both in its prose and maths originality i dont know of any prior work that approaches the relationship between rnns and automata in quite this way qualitysignificance i have one major concern about the interpretation of the experiments in this paper the paper seems to express the following logic 1 linear and nonlinear decoders arent so good at predicting mdfa states from rnn hidden states 2 if we make an abstract finite automata fa by merging states of the mdfa to optimize decoder performance the linear and nonlinear decoders are much better at predicting this new smaller fas states 3 thus trained rnns implement something like an abstract fa to recognize formal languages however a more appropriate interpretation of these experiments seems to be 1 same 2 if we find the output classes the decoder is most often confused between then merge them into one class the decoders performance increases trivially in other words you just removed the hardest parts of the classification problem so performance increased note performance also increases because there are fewer classes in the mergedstate fa prediction problem eg chance accuracy is higher 3 thus from these experiments its hard to say much about the relationship between trained rnns and finite automata i see that the accuracy measurement for the mergedstate fa prediction problem rho is somewhat more complicated than i would have expected eg it takes into account delta and fht as well as fht1 ultimately this formulation still asks whether any state in the merged stateset that contains fh transitions under the mdfa to the any state in the merged stateset that contains fht1 as a result as far as i can tell the basic logic of the interpretation i laid out still applies perhaps ive missed something ill look forward to the author response which may alleviate my concern pros very clearly written understanding trained rnns is an important topic cons the basic logic of the conclusion may be flawed will await author response minor the regular expression in figure 6 top is for phone numbers instead of emails average linear decoding accuracy as a function of m in the mdfa i dont think m was ever defined from contexts it looks like its the number of nodes in the mdfa average ratio of coarseness it would be nice to be explicit about 
what the ratio of coarseness is im guessing its number of nodes in mdfanumber of nodes in abstracted dfa what are the integers and percentages inside the circles in figure 6 figures 4 and 5 are difficult to interpret because the same or at least very similar colors are used multiple times i dont see a as in at in the equations on page 3 defined anywhere i think its meant to indicate a symbol in the alphabet sigma maybe i missed it ### Summary:
this paper presents experiments showing that a linear mapping exists between the hidden states of rnns trained to recognise rather than model formal languages and the states of the corresponding minimal automata in the hope of at least partially elucidating the sort of representations this class of network architectures learns this is important and timely work fitting into a research programme begun by cl giles in 92 despite its relatively low overall score i am concurring with the assessment made by reviewer 1 whose expertise in the topic i am aware of and respect but more importantly i feel the review process has failed the authors here reviewers 2 and 3 had as chief concern that there were issues with the clarity of some aspects of the paper the authors made a substantial and bona fide attempt in their response to address the points of concern raised by these reviewers this is precisely what the discussion period of iclr is for and one would expect that clarity issues can be successfully remedied during this period i am disappointed to have seen little timely engagement from these reviewers or willingness to explain why they are sticking by their assessment if not revisiting it as far as i am concerned the authors have done an appropriate job of addressing these concerns and given reviewer 1s support for the paper i am happy to add mine as well
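For readers unfamiliar with the decoding experiments the reviews above refer to, a minimal linear probe from hidden states to automaton states can be set up as below; the random data, dimensions, and least-squares fit are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def fit_linear_probe(H, states, n_states):
    """Least-squares linear decoder from RNN hidden states H (n, d) to one-hot
    automaton-state labels; returns the weights and the training accuracy."""
    Y = np.eye(n_states)[states]                   # (n, n_states) one-hot targets
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)      # (d, n_states) probe weights
    acc = ((H @ W).argmax(axis=1) == states).mean()
    return W, acc

# toy data: 1000 hidden states of dimension 32, 6 automaton states
rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 32))
states = rng.integers(0, 6, size=1000)
W, acc = fit_linear_probe(H, states, n_states=6)
print(f"decoding accuracy: {acc:.2f}")             # near chance here, data is random
```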
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposes a modification of the saliency mapgradient approach to explain neural networks method summary the approach is as follows for each layer the gradient wrt its input layer is computed for multiple images concurrently then for conv layers the activations are averaged per feature map over space as a result for both fully connected and convolutional layers there is a 3d feature map from these at most b positive outliers are selected to be propagated further what is a bit strange is that in the results section guided backpropagation is mentioned and clearly used in the visualizations but not mentioned in the technical description recommendation the current evaluation is definitely not sufficient for acceptance the evaluation is done in a purely qualitative matter even in section 41 quantitive justification of outliers as relevant neurons the results appear to be interesting but there is no effort done to confirm that the neurons considered to be relevant are truly relevant on top of that it is also evaluated only on a single network and no theoretical justification is provided discussion wrt the evaluation to improve section 41 the authors could for example drop out the most important neurons and reevaluate the model to see whether the selected neurons have a larger impact than randomly selected neurons since the network is trained with dropout it should be somewhat robust to this this would not be a definitive test but it would be more convincing than the current evaluation furthermore high values do not imply importance it might be possible that i misunderstood the experiment in figure 2 so please correct me if this is the case in the reasoning below in figure 2 fc2 is analyzed this is the second to last layer so i assume that only the backpropagation from logits i make this assumption since this is what is done commonly and it is not specified in the paper to fc2 was used since we start at the same output neuron for a single class all visualisations will use the same weight vector that is propagated back the only difference between images comes from which relus were active but the amount if variability is probably small since the images were selected to be classified with high confidence hence the outliers originate from a large weight to a specific neuron the interpretation in the second paragraph of section 421 is not scientific at all i looked at the german shepherd images and there are no teeth visible but again this is a claim that can be falsified easily compare the results when german shepherds with teeth visible are used and when they are not the same holds for the hypothesis of the degree of danger wrt the separation finally there is no proof that the approach works better than using the magnitude of neuron activations themselves which would be an interesting baseline additional remarks the following is an odd formulation since it takes a 3d tensor out of a 5d one and mixes these in the explanation the result of equation for is a 5d relevance tensor omegalni in rhtimes wtimes k the quality of the figures is particularly poor figure 1 b did not help me to understand the concept figure 2 the text on the figure is unreadable figure 4a is not readable when printed docsepsummary this paper introduces stepwise sensitivity analysis ssa which is a modification of saliency maps baehrens et al 2010 simonyan et al 2013 to a perlayer implementation instead of only 
measuring the importance of input nodes eg pixels to the classification ssa measures the importance of all nodes at each layer this allows for a way to find the important subnodes for each node in the tree given a particular sample it is then straightforward to aggregate results across different input samples and output a dependency graph for nodes novelty the technical contribution is a very simple extension of simonyan et al 2013 the main novelty lies within the created dependency graph from the node importance weights but the usefulness of such graph is unclear in addition the claim that this is the first method that aggregates results of an instancespecific method to gain modelcentric results is a stretch considering other works have found important nodes or filters for a specific class by aggregating across instancespecific samples yosinski et al 2015 evaluation the idea of producing an interpretable dependency graph for nodes is interesting and the possible conclusions from such graphs seem promising however most of the interesting possible conclusions seem to be put off for future work i dont believe the experiments are sufficient to show the significance of ssa the main hypothesis is that dependency graphs allow for a way to interpret the model across samples but it doesnt show any conclusive results about the data or models that wasnt previously known the results are mostly speculative such as the fact that german shepherd and great white shark nodes are clustered together possibly due to the fact that both of these classes share a pdr encoding sharp teeth but that is never actually demonstrated docsepsummary the paper introduces a new approach for interpreting deep neural networks called stepwise sensitivity analysis the approach is conceptually quite simple and involves some interesting ideas but i have some serious concerns whether the output produced by this method carries any meaning at all if the authors were able to refute my concerns detailed below i would raise my score substantially strengths potentially interesting heuristic to identify groups of feature channels in dnns that encode image features in a distributed way weaknesses using the magnitude of the gradient in intermediate layers of relu networks is not indicative of importance no verification of the method on a simple toy example details main issue magnitude of the gradient as a measure of importance i have trouble with the use of the gradient to identify outliers which are deemed important comparing the magnitude of activations across features does not make sense in a convnet with relus because the scale of activations in each feature map is arbitrary and meaningless consider a feature map hlixyf llayer iimages xypixels ffeature channels convolution kernels wlxykf kinput channels foutput channels and biases blf hlif relublf sumk hl1ik wlkf assume without loss of generality the feature map hlf has mean zero and unit variance computed over all images i in the training set and all pixels xy lets multiply all incoming convolution kernels wlf and biases blf by 10 as a result this feature map will now have a variance of 100 over images and pixels additionally lets divide all outgoing kernels wl1f by 10 simple linear algebra suffices to verify that the next layers features hl1 and therefore the entire network output are unaffected by this manipulation however the gradient of all units in this feature map is 10x as high as that of the original network of course the gradient in layer l1 will be unaltered once we 
backpropagate through wl but because of the authors selection of outlier units their graph will look vastly different in other words it is unclear to me how any method based on gradients should be able to meaningfully assign importance to entire feature maps one could potentially start with the assumption of equal importance when averaged over all images in the dataset and normalize the activations for instance relu networks with batch norm and without postnormalization scaling would satisfy this assumption however for vgg16 studied here this is not the case on a related note the authors observation in fig 4b that the same features are both strongly positive and strongly negative outliers for the same class suggests that this feature simply has a higher variance than the others in the same layer and is therefore picked most of the time similarly the fact that vastly different classes such as shark and german sheppard share the same subgraphs speaks to the same potential issue secondary issue no verification of the method on simple understandable toy example as shown by kindermans et al 1 gradientbased attribution methods fail to produce the correct result even for the simplest possible linear examples the authors do not seem to be aware of this work at least its not cited so i suggest they have a look and discuss the implications wrt their own work in addition i think the authors should demonstrate on a simple controlled eg linear toy example that their method works as expected before jumping to a deep neural network i suppose the issue discussed above will also surface in purely linear multilayer networks where the intermediate layers and their gradients can be rescaled arbitrarily without changing the networks function references 1 kindermans pj schtt kt alber m mller kr erhan d kim b dhne s 2017 learning how to explain neural networks patternnet and patternattribution arxiv170505598 available at httparxivorgabs170505598 ### Summary:
this work proposes a modification of gradient based saliency map methods that measure the importance of all nodes at each layer the reviewers found the novelty is rather marginal and that the evaluation is not up to par since its mostly qualitative the reviewers are in strong agreement that this work does not pass the bar for acceptance
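The rescaling argument made by the third reviewer above is easy to check numerically. Below is a small self-contained demonstration, under the assumption of a random two-layer ReLU toy network, that the network output is invariant when one hidden feature's incoming weights and bias are multiplied by a constant and its outgoing weights divided by the same constant, while that feature's activations, and hence any importance score based on their raw magnitude, are not invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 8))                     # 5 inputs, 8 input features
W1, b1 = rng.normal(size=(8, 16)), rng.normal(size=16)
W2 = rng.normal(size=(16, 4))

def forward(W1, b1, W2):
    h = np.maximum(0.0, x @ W1 + b1)            # ReLU hidden layer
    return h, h @ W2                            # hidden activations and outputs

h, out = forward(W1, b1, W2)

c = 10.0                                        # rescale hidden feature 0
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[:, 0] *= c; b1s[0] *= c                     # incoming weights and bias x10
W2s[0, :] /= c                                  # outgoing weights /10
hs, outs = forward(W1s, b1s, W2s)

print(np.allclose(out, outs))                   # True: network function unchanged
print(np.allclose(hs[:, 0], c * h[:, 0]))       # True: feature activations scaled by c
```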
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies the problem of predicting the segmentations and poses position yaw orientation of multiple objects given the image of a scene the paper introduces a method that is trained without supervision for the segmentations similar to several other recent objectcentric models in contrast to these existing models the method proposed in this paper additionally estimates the 3d location of each object by predicting a depth map and classifies the yaw angle by representing the pose domain as equallyspaced bins to do so during training the method operates on a short clip of the scene recorded by a moving camera and uses selfsupervision by predicting the scenes image at the next time step at test time the model is able to infer a representation of each object in the scene and segment them given a single image of the scene strength 1 the paper tackles the difficult problem of learning to segment objects from an image using no supervision during training 2 the problem setting and motivation for this task are explained clearly a detailed description of the method along with a pseudocode of the learning algorithm is provided in the paper 3 the paper introduces a new synthetic dataset of images taken from scenes with multiple objects with varying shapes and textures 11 and 15 respectively 4 figures 3ad are very helpful explaining the quantitative performance of the method in relation to the baselines figures 3eg are also helpful showing the failure modes of the proposed method weakness 1 i am not fully convinced that the comparison to the baselines is entirely fair if i understand correctly the rest of the methods were trained on single images without having access to previous and next frames while i appreciate the methods usage of consecutive frames as part of the supervision i think this should be stated clearly when comparing with the baselines to avoid any overclaims synsin 1 for example also predicts the next frames rgb and depth images without using additional supervision similar to the proposed method in this paper synsin synthesizes future images given their camera poses by warping the current frame using differentiable rendering combining an objectcentric approach like slotattention with such a method that performs future image prediction would make a fairer comparison in my opinion 1 wiles o gkioxari g szeliski r and johnson j 2020 synsin endtoend view synthesis from a single image in proceedings of the ieeecvf conference on computer vision and pattern recognition pp 74677477 2 i think there are some key references missing in the paper too for instance a paper from last year also learns an objectcentric representation in an unsupervised fashion to decompose the objects in a scene while estimating their poses in 3d 2 how does 2 compare to the method presented in this paper what are the main differences between them 2 henderson p and lampert ch 2020 unsupervised objectcentric video generation and decomposition in 3d advances in neural information processing systems 33 3 the results are reported using only the dataset introduced in the paper i suggest including results from datasets like the objects room or clevr where the existing methods monet slotattention genesis 2 have already reported results on 4 there are no ablation studies reported in the paper some parts of the loss function seem redundant judging from their current descriptions i think it would be very helpful 
either presenting additional results by ablating the loss function part of the model or detail in the text why the method needs each of those components recommended decision i think in its current form the paper is not ready to be published i strongly encourage the authors to clarify the positioning of this paper in relation to the stateoftheart see my comment on weaknesses i vote for reject not good enough for now but i would be happy to increase my rating once the authors clear up my concerns additional feedback 1 it should be hatm instead of hatt in the lspatial loss third term 2 what is xrand if they are randomly sampled point positions then what is their underlying distribution 3 have you tested the method on scenes with more than 3 objects slot attention for instance is able to segment up to 9 objects since the paper introduces a new dataset i wouldve hoped to see a more challenging benchmark with a wider variety of objects and number of instances 4 also the dataset assumes that the camera moves with constant pose change have you tested the motion prediction position orientation and their timederivatives in the presence of different camera velocities if not are there any limitations of the method that prevents it 5 i suggest adding the equivalents of figure 3g for each baseline in the appendix 6 what is the number of bins for the yaw angle prediction b have you tried using a continuous representation for the rotation 7 what does the predicted image i for the next timestep look like i suggest including more qualitative results to the paper for evaluating the warping function 8 i think it is a good idea to introduce a more diverse benchmarking dataset for learning objectcentric representations however i think the dataset proposed in the paper should be further expanded by including more daily life objects instead of just geometric primitives like spheres prisms and cones i suggest taking a look at datasets like ycb or shapenet to include more realistic objects to your dataset all in all i think the paper attempts to tackle a very challenging problem the method looks sound and the results might be interesting to the community in learning objectcentric representations however there are some major concerns i have mainly about the position and novelty of this paper with respect to a paper from last year and the lack of results from datasets the stateoftheart methods report on docsepthis paper presents a method to learn how to parse 3frame videos into objectcentric representations which include segmentation masks and 3d positions and yaws for those objects framebyframe as well as an image representation of the background and an overall depth map this is accomplished with a depth network an object network with an lstm at the bottleneck to iteratively pick out objects and their positions and yaws and a decoder to provide segmentations a warpingrecompositing operation that pastes the inferred objects at their estimated positions for the next frame ie with a constantvelocity assumption and finally an imagination network which refines the estimated image the model learns with a combination of 4 losses which include image reprojectionprediction depth consistency a spatial term that includes consistency and randomness though i have complaints about this and finally a penalty term that discourages object probabilities from being zero the paper also introduces a new synthetic dataset where prior methods do badly and the proposed method does slightly better the learned depth maps look good but this 
is perhaps expected because camera poses are known overall this paper is messily written and proposes something that only works marginally better than prior methods on a synthetic toy dataset the performance in table 1 illustrates this the standard deviation of the segmentation iou 034 is about even with the average iou 035 looking at the qualitative results in figure 2 it seems the model often misses objects completely and produces segmentations that are fractional and have holes i appreciate that the baselines are doing badly here also but slot attention had very similar arifg 042035 vs 046039 statistically maybe these are equivalent is it possible at least to show that this method does better on the datasets where those previous papers managed to work like clevr and those deepmind shapes datasets in general this area of work seems stuck in a setup where all methods work well on different toy data but no methods work on real images or videos it seems like the pose estimation is not working at all judging by figure 3e the discussion mentions this too object spatial location is inferred more easily than object pose which we have not fully investigated thus the predicted warping relies more on object translation than rotation maybe the object pose estimation can be removed from the paper entirely to make things simpler our framework can easily generalize to 3d pose i am sure the formulation can easily be extended to capture this but i would prefer that the wording here be a bit more careful to not suggest that the model is expected to work when pitch and roll are unknowns as well if you do expect it to work please try it out and add the results to the paper at time t given the location of the camera ot r3 and its facing direction t does this mean the camera intrinsics and extrinsics are assumed known it would be great to say this directly the discussion section supports this interpretation saying additional information required is that of observers selfmotion which is available both in the brain as efference copy and easily accessible in vehicles i do not really buy the argument about the brain or easy accessibility in vehicles selfdriving vehicles usually register themselves to a known map inferring pose from odometry alone causes drift the lstm reading objects from the fullframe encoding sounds like a very weak part of the model why not for example use a standard object detector like maskrcnn i know you want to be selfsupervised but then why not selfsupervise a wellknown architecture that is proven to work instead of inventing a new one the location prediction is restricted to the range of possible value within the virtual environment by logistic function and linear scaling i dont know what this means the imagination network is clumsily introduced the first instance of it is already the imagination network and the motivation for it is only written in the caption of figure1 i found a helpful description later on in page5 these things should be rearranged the description of the unprojection of a 2d coordinate into 3d space looks odd to me where is this coming from given that ij are coordinates what do i and j represent normally we start from xfxz and then invert this to xzxf where x is 2d and x is 3d and f is the focal length i am also not sure how the first term and second term here are able to multiply since the first term is a 2tuple ij and the third term is a 3tuple idj i also got lost in the angular velocity equation it seems the sum is over all gamma1 all gamma2 and all differences of 
the two within 2pi of omega this is too many things to sum over it seems like you wont end up with a valid probability distribution i am probably misreading the notation here in any case why is this probability distribution useful the objectbased imagination network seems to require a scalar here for the rotational speed so why not take the expectation of the first pose distribution expectation of the second and then take a difference it is interesting that you do not use depth supervision but for toy settings like this i think it is ok to assume depth is known and focus on other hard parts like segmentation and tracking i need some help understanding lspatial the second term is described as a contrastive loss but its a difference between known positions and random positions why is this a good idea and why is it contrastive normally a contrastive loss compares two estimates and pushes them apart rather than pulling every estimate to random i was surprised that the evaluation talked about a model called opple only opple shows a bimodal distribution apparently this is the name of the proposed model and the place to learn this is the caption of figure 1 please do not put critical information exclusively in figure captions typos probability mess probability mass generated dataset generated a dataset statesofart stateoftheart network appear to network appears to intersectin in fig3 intersection figure 1 is never referred to in the text i think the paper is not quite ready for publication the method does not work particularly well a part of the method object pose estimation seems to be not working at all and the evaluation is only on a new toy dataset and does not include evaluation on established datasets also the text contains too much notation and has some parts mixed up with terms being used before they are introduced but i think this can be fixed easily docsepthis paper presents an unsupervised objectcentric scene representation technique that can decompose a scene into multiple objects and segment the scene and infer their 3d locations and pose the overall setup is very similar to earlier models like monet but this model works on sequences of images more precisely on 3 consecutive images it uses the first two images to infer the 3d position and pose of objects and combining this with known camera motion tries to predict the last third image the main contribution here is an optical flow based method to warp the image at time t using the predicted object locationposedepth to predict some of the pixels in image at time t1 in more detail the object extraction network outputs the location and pose of each object and a separate depth perception network outputs the depth for each pixel in the image the location and pose of objects are used to estimate the velocity of each object eg by subtracting the position at t1 from position at t note this requires matching each object at time t1 to object in time t which they do using a softmatching approach these along with the depth information are then used to warp the image at t to predict pixels in image at time t1 this is possible only for a subset of the pixels so for the rest they use a separate imagination network that takes in object information and predicts the colordepth and object masks at t1 the predictions from warping and imagination network are then combined to form the final predicted color and depth images to train the model they require images and camera motion and use a combination of losses reconstruction loss on predicted and ground 
truth image selfsupervised losses on object location pose and depth overall i found the paper quite interesting i think the optical flow based warping to predict some subset of the pixels in the next timestep is in itself an important contribution and this paper would be a good addition to the emerging literature on unsupervised objectcentric models i think the main concern with the paper is limited experimental evaluation the model is evaluated only on a single rather simple dataset that was generated by the authors i know these models cannot usually handle complicated datasets so its fine to have a simple dataset but id have liked to see the model evaluated on some of the datasets that other competing models were evaluated on some of those datasets might not have camera information etc but im sure there are other datasets that have the necessary information eg see 1 2 for some potential datasets also the authors mention that their technique is the first to infer 3d position of objects while segmenting images if im not mistaken 1 also does both and would be a great model to compare against other notes std deviations in table 1 are too large is this a typo if not it looks like all models are doing equally well i dont have any specific recommendations here but i found the model description a bit hard to follow it might be a good idea to do another pass and see if it can be organized better for example the fact that warping and imagination are combined to get the final image can be mentioned earlier so the reader knows wherehow the imagination network is used there are many inline equations these make it difficult to parse the text visually it would be nice to take these out of the text and split the long section 231 to subsections and mark these figure 1 is great but again hard to parse perhaps adding variable names ie x p z etc to the figure might make it easier to understand what goes in and out of each network 1 henderson paul and christoph h lampert 2020 unsupervised objectcentric video generation and decomposition in 3d arxiv cscv arxiv httparxivorgabs200706705 2 kabra et al 2021 simone viewinvariant temporallyabstracted object representations via unsupervised video decomposition httpsarxivorgabs210603849 overall i think the paper is quite interesting and would be of interest to the community however the empirical evaluation is very limited and this makes it difficult to evaluate the full merit of the proposed approach docsepinspired by ideas about how humans learn about objects the authors detail a system to train a neural network to perceive generic objects using image triplets where objects move and the viewer also can move the viewers motion is provided as an input object perception by parts of the network operating on the first two time points is rewarded by predicting what is seen at the third time point the training signal having been trained the object perception part of the system which is a relatively basic neural network can segment them from a single image this new setup requires different data that what has been used in this space and the authors contribute a synthetic dataset as well strengths i like the general thrust of this paper while the basic idea has been around in cognitive psychology which inspired the authors i am not aware of any significant implementation of it this is a nice first step to make this work the authors develop some algorithmic bits that might be useful for followup work the results compare well to others in this space showing that the training 
strategy has real promise in some sense the methods are not really comparable as the other methods do not have a viable way to make use of the additional training data also see comment about number of objects below weaknesses the paper is harder to read than needed i appreciate there is a lot of stuff going and i believe i was able to get most of it after a few iterations so the lack of clarity is not extreme it is not clear how differing numbers of objects are handled it seems that the number of objects might simply be provided k3 in the dataset but if the number of objects is known then the comparison to other work that infers it might not be fair k is in the pseudo code but does not seam inferred should k be an input a few more details on the lstm would help the paper could use some polishing figure 1 which is informative and key could be tidied up also i am guessing that the objects box is the lstm the english could be improved in places and there are a number of grammar errors eg the first two uses of pixel in 231 should be pixels and this last two terms mean on page 5 the authors do not say whether they will release their code comments a clear limitation of this work is that the data is synthetic and very simple although perhaps more complex than other work in this space the authors acknowledge this in their discussion while this might be the standard for this subarea real data from a robot or car should be relatively easy to get easy blockworldlike real data might be better for pushing the work than adding more texture and diverse lighting in synthetic data using just two time points is both a strength and a weakness with just two there is likely to be a lot of ambiguity between translation and rotation especially if you generalize to more than one pose parameter looking at the effect of larger training sequences would be interesting while the part of the system that is deployed at testing is a simple network there are a number of hand constructed components eg the warping function that make use of what we know about cameras it would be more interesting to see those learned this is a good first step in this direction and should inspire follow up work the technical innovation is sensible the results are good ### Summary:
this paper tackles the difficult problem of learning to segment objects from an image using no supervision during training the paper is clearly written and a new synthetic dataset is made available unfortunately the reviewers raised a number of issues with the submission missing citations and comparison to relevant related work additional baselines ablation studies missing empirical evaluation of the proposed method on standard dataset beyond the toy dataset proposed by the authors the paper received 1 reject 2 marginal rejects and 1 accept but even the positive reviewer agreed that these were limitations the authors also conceded to these limitations and initiated experiments that are starting to address the reviewers comments at this time the results of these experiments remain incomplete and hence most reviewers agree that the paper should go through another round of reviews before it is publishable i thus recommend this paper be rejected in the hope that a subsequent revision will make it a much stronger contribution
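One review above questions the paper's unprojection notation; for reference, the standard pinhole-camera unprojection the reviewer appeals to (x = z(u - cx)/f) looks like the sketch below. The focal length and principal point values are made-up, and this is the textbook convention, not the paper's exact equation.

```python
import numpy as np

def unproject(u, v, depth, f, cx, cy):
    """Pinhole unprojection of pixel (u, v) with depth z into camera coordinates,
    inverting u = f*x/z + cx and v = f*y/z + cy."""
    x = depth * (u - cx) / f
    y = depth * (v - cy) / f
    return np.array([x, y, depth])

print(unproject(u=320.0, v=180.0, depth=2.5, f=500.0, cx=320.0, cy=240.0))
# [ 0.  -0.3  2.5]
```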
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies input length extrapolation for transformer language models ie how transformer lms perform on test sequences that are longer than training sequences the paper finds that how positions are encoded plays a crucial role for input length extrapolation models with sinusoidal and rotary position embeddings do not extrapolate well while t5s positiondependent attention mechanism dubbed t5 bias enables better extrapolation the paper then proposes alibi another attention mechanism that also allows extrapolation while being computationally more efficient than t5 bias these results are empirically confirmed on two datasets strengths to my knowledge the paper is the first to study length extrapolation in transformer language models this is an important open problem for language modeling the proposed alibi mechanism is simple to implement and computationally efficient experiments confirm that the proposed method enables length extrapolation for language modeling the paper is wellwritten and easy to follow weaknesses experiments can be expanded i am curious if the findings also apply to other tasks such as text classification sequence labeling and sequencetosequence generation the proposed method is simple to implement so i imagine it would not be hard to add a few more tasks missing related work xu et al 2021 how neural networks extrapolate from feedforward to graph neural networks this paper studies a similar kind of input lengthsize extrapolation for graph neural networks the paper studies a novel problem input length extrapolation in language modeling and proposes a simple solution with good empirical results the paper is also wellwritten one way to further improve the paper is to add experiments on other tasks overall i recommend acceptance of this paper docsepthe paper addresses the extrapolation problem where a test sequence longer than training sequences is given and proposes attention with linear biases alibi that adds a penalty linear to the distance between a query and a key to the attention scores alibi shows remarkable input length extrapolation ability while computationally efficient with almost marginal overhead compared to the standard transformer moreover alibi does not induce any additional parameters and generalizes well to a billion scale language model the method is simple and quite effective the paper addresses an important research problem of input length extrapolation alibi developed on wikitext103 generalizes to 13b parameter model alibis inductive bias also improves the accuracy previous works did not rigorously evaluate the extrapolation of a transformer and simply assumed the possibility of extrapolation on the other hand this paper carefully measured extrapolation compared with other works rotary and t5 bias and devised their own method to overcome the limitations of previous works the method itself might look less novel or incremental because previous works inspire its many parts it would be much better to provide theoretical explanations more than empirical proof on why alibi enables better extrapolation and higher final accuracy alibi is only evaluated on language modeling in this paper a transformer is a widely used neural architecture for many different tasks and domains they also mentioned in the related work section that other works studied extrapolation on other tasks i wonder about the authors thoughts whether their alibi could be 
helpful to other tasks as well of course the importance of the longer context and extrapolation ability may vary depending on the task one minor question is that the dot products of queries and keys are usually divided by the square root of the dimension and it is maybe abbreviated in the equation i am curious this division is performed after or before adding a bias each head has a different slope for the linear bias so i expect that heads learn different patterns an analysis of that would be interesting the authors argue that the method is robust to slope choice but they found that other alternatives underperform such as learning these slopes because many other design choices are possible i am curious how they found the final solution and what they tested alibi was tested on two different model sizes according to their results extrapolation on a billion language model improving until 2x is relatively inferior to that on a wikitext103 scale language model improving until 6x i worry whether extrapolation ability reduces as the model becomes bigger or with more training data the paper is well written and easy to follow the contribution is concrete and practically useful since a transformer is a building block of many machine learning models more importantly the size of language models becomes bigger so their training cost is prohibitive alibi improves the efficiency of language model or transformers in general training docsepthe submission proposed an effective approach to allow pretrained transformerbased language models to extrapolate beyond the maximum length used in training which potentially reduces the training time as extrapolation is empirically guaranteed the proposed method adds fixed biases to the dotproduct values between queries and keys that linearly decays wrt the gap between two positions empirically the proposed method indeed successfully allows pretrained models to be evaluated on sequences that are multiple times longer than the training ones without significant loss at a very high level i did enjoy the paper as the method is simple and it indeed helps a pretrained transformerbased models to extrapolate to much longer sequences some of my concerns were addressed in the authors response and the others do require extensive exploration therefore i would like to see this submittion at iclr2022 end of the update 1 when the dimension of a transformer module is roughly the same as or significantly larger than the number of tokens the dimension becomes the main contributing factor to the time complexity which explains why with the linear bias the model only achieved 10 speedup 2 i was wondering if we could directly manipulate the probability after the softmax layer it probably would achieve a similar effect for example one can multiply the probability map with a matrix with 1s in the diagonal terms and with linearly decaying offdiagonal terms towards 0 which also effectively biased the model to learn from nearby tokens my point here is that the submission could have been more generalised in a way that say as long as the bias terms are fixed before training and they have an impact on the attention scores or the probability maps the model will extrapolate to very long sequences this wouldve been a stronger and more generalised message 3 the title and the intro gave me the impression that it was designed for transformers but i was wondering whether it would hinder transformers capability in modelling images or biological sequences where tokens that are far from the current one would 
still play an important role for images the current approach of serialising an image is either at pixellevel or at patchlevel which means that tokens surrounding the current one in 2dimensional space will be the context however the proposed approach would potentially worsen the situation the submission proposed a simple yet effective method that helps pretrained language models to extrapolate beyond the sequence length used in the training but i think the paper couldve delivered a stronger message i am open to discussions docsepthis paper investigates the extrapolation capability of transformerbased language models the authors observed that existing positional encoding methods eg sinusoidal embedding relative positional embedding fail to generalize to longer sequences in language modeling tasks therefore they introduce a new positional encoding method called alibi which adds temporal bias to the multihead attention to penalize attention score proportional to token distances experimental results show that alibi has significantly stronger extrapolation capability compared to other positional encoding methods pros injecting temporal bias to attention is a neat idea for the language model extrapolation problems this paper presents comprehensive experiments on comparing the proposed method with existing positional encoding approaches the paper is well written and easy to understand cons it would be helpful to discuss the potential applications of the proposed method other than language modeling i am curious about comparing the transformeralibi with lstm in extrapolation tasks the idea of adding temporal bias to attention is similar to the forget gate in lstms therefore adding lstm as a reference will make the paper stronger update the additional results are convincing i raised my rating to acceptance this paper proposes an interesting and novel idea for enhancing the extrapolation capability of transformerbased language models a few additional experiments and discussions will make the paper stronger ### Summary:
this submission proposes a simple efficient and effective position representation method for the transformer architecture called alibi alibi enables better extrapolation and performance in terms of efficiency and task performance the submission also includes careful analysis and extensive experiments and notably suggests that the gains of alibi may be less pronounced in more scaledup settings all reviewers agreed the paper should be accepted i think its reasonably likely that alibi will become a common choice in future transformer models or at the very least that this work will prompt further work on developing improved position representations for transformer models i therefore recommend acceptance
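To make the mechanism discussed in the reviews above concrete, here is a minimal single-head sketch of adding an ALiBi-style linear distance penalty to attention scores before the softmax. The toy dimensions, the slope value, and the numpy implementation are illustrative assumptions rather than the paper's code; per the reviews, each head uses a different fixed slope.

```python
import numpy as np

def alibi_scores(q, k, slope):
    """Scaled dot-product attention scores for one head with an ALiBi-style
    penalty slope * (i - j) subtracted for each causal position j <= i,
    and future positions masked out."""
    L, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    dist = np.arange(L)[:, None] - np.arange(L)[None, :]   # i - j
    scores = scores - slope * np.maximum(dist, 0)           # linear distance penalty
    return np.where(dist >= 0, scores, -np.inf)             # causal mask

# toy usage: 6 positions, head dimension 4, slope 0.5 for this head
rng = np.random.default_rng(0)
q, k = rng.normal(size=(6, 4)), rng.normal(size=(6, 4))
print(alibi_scores(q, k, slope=0.5).shape)                  # (6, 6)
```

Because the penalty grows linearly with distance and involves no learned position embedding, the same formula applies unchanged to sequences longer than those seen in training, which is the extrapolation property the reviews emphasize.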
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work automatically selects a best detection model while simultaneously controlling the false discovery rate the experimental results shows that the proposed method can control the false discovery rate fdr and the true discovery rate tdr simultaneously this paper is very clearly written and easy to understand i really enjoy reading this paper and it makes interesting contribution the key idea is estimating more stable pvalue for better fdr and then adding extra step ie model selection for additional tdr control its not surprising to see in experiments this method have better tdr than those methods without controlling tdr this paper can be seen a good extension work of bates et al 17 the authors replace the simple split conformal prediction with jackknife technique for more accurate estimated pvalue by fully exploring the clean data and avoid the randomness caused by datasplitting this idea is very straightforward another contribution is selecting the best model from a pool of detectors best here means that the model detected the most outliers in the new dataset which is not novel technique as well overall the novelty of this work is limited the manuscript will benefit from adding explanation about the novelty of such combination of two existing techniques theoretically or practically yes docsepthis paper proposes a general automl framework for novelty detection and controlling the error rate of the model the framework consists of an automated model selection procedure with fdr control the theoretical bound is provided for automs extensive experiments are conducted to demonstrate its effectiveness strengths 1 the paper proposed a unified framework that can be combined with different base detectors 2 the paper provides a theoretical bound of fdr 3 experiments are conducted to evaluate the effectiveness of automs on both synthetic and realworld data weaknesses 1 only several realworld datasets are selected in the experiments as a comparison the previous work metaod has performed experiments on hundreds of datasets the authors are encouraged to conduct a more thorough comparison with metaod none docsepthe authors propose a model selection method for novelty detection with false discovery rate fdr control given a detection model m a detection threshold lm is selected based on the benjaminihochberg bh procedure so that the fdr of m is less than alpha to estimate the pvalues in the bh procedure precisely the authors propose to apply the jackknife estimation which extends the existing work by bates et al after estimating lm for each model m in g the model that most detects the novelties with lm is selected as the best model m the authors also give theoretical results to show that the fdp of m is nonasymptotically bounded and the fdr of m is asymptotically bounded by alpha experiments using synthetic or real datasets demonstrate the advantage of the proposed method against the work by bates et al or metaod strengths hyperparameter tuning or model selection is especially hard in unsupervised settings like novelty dectecion this paper proposes a simple yet effective approach for this problem from the viewpoint of maximizem in g detectionm subject to fdrm lt alpha the control of fdr of m is mainly achieved by the existing framework of bates et al but its jackknife extention is proposed weaknesses the computational overhead of applying the jackknife procedure is not negligible 
especially when the training set is large experimental results eg fig 3 suggest that the fdr control of m gets slightly worse when applying the jackknife compared to the original srs by bates et al postrebuttal comments thanks to the authors for their response the newly added experimental results and the authors response addressed part of my concerns i have raised my score the computational overhead of applying the kfold jackknife relative to the original srs should be assessed in the experiments ### Summary:
the paper proposes a method for finding the best anomaly detector among a set of candidate methods that are all based on constructing a score function the selection method is based on a leaveoneout estimate some theoretical results are presented and proven in the appendix and in addition some experiments are reported overall this paper presents a novel and interesting method for an important problem and the theoretical considerations are certainly a plus the only major issue of the paper is that only 4 real world data sets were considered and despite the fact that this problem was raised by the reviewers the authors did not include more during the rebuttal phase from my perspective a strongly theoretical paper does not require extensive experiments but the paper under review does not fall into this category and for this reason more experiments on say another 15 data sets would have been really helpful in summary this is an interesting paper with a sufficiently good theoretical part and some promising experiments the latter could have been more but overall this paper should be accepted
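the reviews and summary above describe the core selection loop compute a pvalue per test point for each candidate detector apply the benjaminihochberg procedure at level alpha and keep the detector that flags the most novelties the sketch below is a schematic rendering of that loop using plain splitconformal pvalues rather than the papers jackknife variant the detector interface (fit and score with higher scores meaning more novel) and the function names are assumptions for illustration

```python
import numpy as np

def conformal_pvalues(calib_scores, test_scores):
    # p_j = (1 + #{calibration scores >= test score j}) / (n_calib + 1);
    # higher novelty scores give smaller p-values
    n = len(calib_scores)
    return np.array([(1 + np.sum(calib_scores >= s)) / (n + 1) for s in test_scores])

def benjamini_hochberg(pvalues, alpha):
    # boolean detection mask controlling FDR at level alpha
    m = len(pvalues)
    order = np.argsort(pvalues)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = np.where(pvalues[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passed.size > 0:
        reject[order[: passed.max() + 1]] = True
    return reject

def select_detector(detectors, clean_train, clean_calib, x_test, alpha=0.1):
    # keep the detector that flags the most novelties, with each detector's
    # threshold set by BH at level alpha (hypothetical fit/score interface)
    best, best_count, best_mask = None, -1, None
    for det in detectors:
        det.fit(clean_train)
        pvals = conformal_pvalues(det.score(clean_calib), det.score(x_test))
        mask = benjamini_hochberg(pvals, alpha)
        if mask.sum() > best_count:
            best, best_count, best_mask = det, mask.sum(), mask
    return best, best_mask
```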
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: the paper proposes a novel neural representation of a given image which decomposes k object instances from the background to support various tasks such as rerendering rearrangement etc the learning process first samples k centers and represents each object as a learnable hidden variable z a gaussian based soft kmeans style clustering is then performed afterwards the differences here are 1 for sampling the centers there are learnt foreground and background priors which can benefit the initial state in this learning process 2 updating the centers z with a learnable gru rather than simple mean pooling which i believe gives more flexible representations for the cluster (a sketch of this style of update is given after the summary below) finally the object clusters are discovered in this process the authors evaluated on 3 selfcreated datasets and show several reasonable results by performing the mentioned tasks such as 3d segmentation rearrangement etc strengths the overall direction is promising and factorizing the scene representation is indeed an important issue to study the technique proposed is sound overall with a soft kmeans like strategy to generate corresponding features in an unsupervised manner though the gru probably breaks the theoretical guarantee of convergence weaknesses the overall results look more like a proof of concept the objects in all test datasets are relatively simple having uniform color i feel it can hardly work in real scene scenarios as shown in the giraffe paper under these scenarios the segmented results and rearranged results lose many object details yielding blurry or incorrect rendering i feel there should be more improvement over these issues the overall concept is fairly close to giraffe and the major difference could be the training scheme inference from a single image or multiple images i would like to see additional technical improvements especially some high resolution representations or improvements of the architecture in order to support better quality some questions does the algorithm always obtain a reasonable representation wrt different initially sampled centers this paper points out a good direction to dive into unsupervised learning of compositional scene representations however the technical strength and novelty may need to be further improved docsepthis paper utilizes the powerful nerf the authors present a new approach to learn the scene arrangement in an unsupervised way the training is performed on unlabeled datasets of similar objects in different arrangements the inference requires only one rgb image as input and can correctly deduce the arrangement and the 3d geometry of the objects the authors showed two supporting technical contributions to the system 1 splitting background and foreground objects leads to better results 2 a coarsetofine training to alleviate the space and time needs the authors showed success on three synthetic datasets and various applications the authors presented a novel idea the system is engineered well and the authors have shown success on three synthetic datasets and various applications my concerns are as follows 1 all experiments are conducted on synthetic datasets both training and testing use the same set of object shapes only the arrangements of the objects are different it is not clear how this approach can generalize to real world scenes as in the real world lots of objects have not been seen in the training set an ablation study that adds unseen shapes into the testing scenes can be very
informative additionally a demonstration of the approach on real world scenes would be a strong result to show in the paper whether it is negative or positive 2 how is the number of foreground objects decided does it have a strong impact on the results edit the authors addressed my concerns in their revision in particular the authors showed additional results on a real world image as expected the rendering is not as good as on synthetic data however i do not think this overshadows the contribution of this paper instead it shows the value as well as the limitations of the proposed method and can inspire future work the authors presented a novel idea the system is engineered well and the authors have shown success on three synthetic datasets and various applications however all evaluation and experiments are done on synthetic datasets with the same set of objects and background thus it is unclear how this approach can generalize to solve real world problems edit the authors addressed my concerns in their revision i think it is a good paper and should be accepted docsepthe paper introduces an interesting new research direction of factorized 3d consistent neural representations in particular it proposes to combine slotattention mechanisms with conditional neural radiance fields to segment and render novel views of a scene from a single input view the authors also address one apparent shortcoming in the slotattention paper the background and foreground object latent codes are sampled from the same distribution leading to breakdowns on scenes with complicated backgrounds the paper proposes to learn two disjoint distributions one for the background and one for the foreground to alleviate this issue pros 1 the authors address a significant new problem modeling 3d scenes as a disjoint set of objects that can be combined and rendered for novel view synthesis their model can also be learned from only 2d data 2 the proposed method uses stateoftheart techniques to achieve its goal namely it uses slotattention mechanisms neurips 2020 and neural radiance fields eccv 2020 and combines them to address a new problem 3 treating background and foreground latent vectors as being drawn from two separately learnable distributions addresses one of the significant drawbacks of slot attention 4 i appreciated the authors mentioning that concurrent work by stelzner et al 2021 addresses the same issue and differentiating this paper appropriately 5 the proposed method is overall technically sound and code is provided which will help with reproducibility also the authors do an excellent job of mentioning all the hyperparameters and model architecture details as far as i can see 6 the paper provides comprehensive experimental evaluations both quantitative and qualitative negatives questions 1 it would have been great to compare this work to the concurrent work of stelzner et al 2021 i believe that this would help the community to put the two concurrent submissions into context that said i do recognize that the work of stelzner et al 2021 has not been published and that code and data for their approach are not publicly available making comparisons extremely difficult so this should not be a requirement for publication of this work i hope that future work in this direction will pick up this issue 2 section 33 coarse to fine training i agree that rendering images with the volumetric rendering framework proposed in nerf requires many evaluations per ray so reducing the number of rays sampled during training makes sense one
detail that is missing in this section is how many samples per ray are used moreover do the authors still use the two networks for ray evaluation as in nerf ie coarse and fine ones if not why not second in your approach you sample random patches during the fine training stage and downsample the images during the coarse training stage in the paper graf generative radiance fields for 3daware image synthesis the authors propose a different sampling strategy which does not require downsampling would this strategy perform better or worse 3 section 41 segmentation experiment results it would help to discuss the shortcomings of slotattention namely why the method fails on the segmentation task my guess is that uorf works better due to the two disjoint latent spaces for foreground and background i think this should be highlighted here 4 section 43 scene design and editing setup i did not understand how the setup for modifying the foreground object pose and appearance works i think you can switch the latent embeddings for the background as you have a onetoone mapping but this is not the case for the foreground objects could you explain this process in more detail 5 appendix b2 coordinate space here you mention that you use a foreground box to encourage disentanglement of foreground and background slots how does this foreground box work what is its influence on the final result mentioning this possibly crucial detail only in the appendix is not sufficient in my opinion and it should be better explained in the main text i vote to accept this paper for publication at iclr 2022 i like the idea of modeling scenes as a combination of disjoint objects which can be added removed modified and recombined to form new scenes i also think the paper is well written wellmotivated and provides extensive experiments in my opinion the paper adds to the literature on neural scene representation and decomposition and is interesting to the community i have some minor suggestions and questions see above which i hope the authors clarify during the rebuttal ### Summary:
this paper develops a method for decomposing scenes into objectspecific neural radiance fields after the discussion phase two reviewers support acceptance empirical results on multiple synthetic datasets and benchmarks appear convincing the rebuttal also added an initial demonstration of generalization to real images
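the first review above summarizes the learning process as sampling k object latents and refining them with a gaussian soft kmeans style clustering whose centers are updated by a learnable gru rather than a simple mean the sketch below shows one iteration of such an update in the spirit of slot attention it is a generic illustration of that mechanism not the submissions code and the projection layers dimensions and number of iterations are assumptions

```python
import torch
import torch.nn as nn

class SoftClusterGRU(nn.Module):
    # slot-attention-style refinement: slots compete for input features via a
    # softmax over slots, then each slot (cluster center) is updated by a GRU cell
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.scale = dim ** -0.5

    def step(self, slots, inputs):
        # slots: (batch, k, dim)   inputs: (batch, n, dim)
        q, k, v = self.to_q(slots), self.to_k(inputs), self.to_v(inputs)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)  # compete over slots
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)            # weighted-mean weights
        updates = attn @ v                                               # (batch, k, dim)
        b, k_slots, d = slots.shape
        new = self.gru(updates.reshape(-1, d), slots.reshape(-1, d))     # learnable update instead of mean
        return new.reshape(b, k_slots, d)

    def forward(self, slots, inputs, iters=3):
        for _ in range(iters):
            slots = self.step(slots, inputs)
        return slots
```

in the method as described the initial slots would be drawn from separate learnt foreground and background priors rather than a single shared distribution which is the point several reviews highlight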
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: this work studies the question to what extent a reparametrization of an optimization problem ie representing the original parameters w to optimize for as a function of some other parameters theta can accelerate the convergence of the gradient flow or gradient descent for nonconvex optimization problems it studies the dynamics of the flow via eigenvectors of a matrix m formed as the expectation over the outer product of the gradient of the loss with itself to reveal slow and fast modes of the evolution it subsequently derives sufficient conditions for the reparametrization which is chosen to be linear but time varying to balance the decay on all modes after discussing an efficient approximation of the theoretically derived scheme numerical results demonstrate the effectiveness of the proposed reparametrization in two exemplary applications in terms of its strengths this paper contains interesting thoughts about the intriguing idea that a temporally varying linear reparametrization of the unknown can accelerate gradient flow based optimization the general topic the combination of theoretical analysis and numerical experiments and the bridge between the two by using efficient numerical approximations of what the theory demands are strengths of this paper and although the numerical experiments are certainly not exhaustive there is some proof of concept of the benefit in the particular applications considered here unfortunately the paper also has some clear drawbacks in particular i found the paper difficult to follow and the main idea from an optimization perspective appears to be unnecessarily hidden in a framework of neural reparametrizations unless i misunderstood the main idea significantly the neural reparametrization illustrated by a neural network in fig 1b later turns out to be a linear parametrization only ie considering the gradient flow for theta in w(t) = kappa(t) theta(t) instead of in the original variable w before considering this to be a graph neural network i would have been interested in how this idea relates to other classical optimization methods has the idea of a temporally changing but linear reparametrization not been considered in the optimization literature before as kappa turns out to be the square root of the inverse of the hessian is there a relation to newton or quasinewton methods for me the paper would have been easier to follow from this more classical optimization perspective in particular the gradient flow resulting from the linear reparametrization seems to be partial_t theta(t) = -kappa(t)^T nabla l(kappa(t) theta(t)) and should be stated explicitly if now the change in kappa is negligible slow in comparison to the change in theta and if kappa represents the scaled square root of the inverse hessian isnt that the flow arising from newtons method i would much rather prefer a clear motivation and presentation of the paper from such a classical perspective before delving into graph neural networks some minor aspects in equation 2 there is an epsilon_ij but i think the way eq 1 is written it is unclear what offdiagonal elements in epsilon mean of course the delta ensures there are no offdiagonals but then i would avoid the notation equation 1 is also an ordinary differential equation i would not call it a partial differential equation i am sometimes not sure which quantities are random variables and which ones are not in eq 4 for instance random variables seem to have
been dragged out of the expectation which i do not understand an example of why the paper was a little difficult for me to follow are sentences like when running gd the maximum change in w is bounded to ensure numerical stability this sounds like a modification of gd like gradient clipping but it is actually meant as a condition to limit the step size you are using thus isnt the reasoning flipped ie in order to ensure numerical stability we have to bound the maximum change in w in the entire analysis it could be made clearer that m is time dependent the first sentence of section 23 is the first time where this is really prominent the discretization for time dependent matrices might of course make the behavior of the actual algorithm differ from the continuous gradient flow before eq 10 it is exemplified that w = sigma(a theta + b) would be a valid choice a b and sigma are however not defined and if sigma refers to a nonlinear activation function i do not see how this is true typos page 2 abounded page 6 adaptove learning rates if the numerical experiments are carried out with adam shouldnt the theory also consider effects like adaptively scaled momentum in fig 2 why does gcn1 seemingly start with a much lower loss function value than the other methods does it have a sharp drop at the beginning gcn with a2 as the propagation rule achieves the highest speedup what is a2 please define also the paper states it is difficult to separate the slow and fast modes where is it difficult the numerical results are to my mind not a strong indication of the proposed approach being a universal way to accelerate gradientbased methods in particular i am wondering how specific the acceleration results are to the applications also what amount of hyperparameter tuning is required for the proposed approach to work well although i like the general idea and do believe that reparametrization can balance out the different convergence speeds of different modes to some extent i found the presentation to be a little confusing the approach seems to reduce to a linear reparametrization which seems to relate it to other more classical approaches along with the list of minor aspects that make the paper a little difficult to follow i need some clarification on this aspect docsepthis paper proposes a reparameterization of nonlinear nonconvex optimization problems this reparameterization amounts to a linear map ie the optimization params are a linear operation of a different set of parameters these linear maps are interpreted as a graph convolution network the experimental results are validated on kuramoto models and persistent homology models strengths the idea of reparameterization is nice weaknesses the experimental evaluation consists of two problems that are not of interest to the iclr community i have certainly never seen either of them used in a ml paper i have no idea how they relate to actual optimization problems i care about ie training deep neural networks the experimental work doesnt look thorough where are the learning rate sweeps comparisons to other optimizers etc etc the paper spends a substantial amount of space pg 24 deriving well known results under assumptions that amount to strong convexity lambdamax to lambdamin ratio controls convergence i strongly suggest that the authors use the results and language of optimization rather than going from first principles for no good reason the final reparameterization is not very interesting although much ado is made about using a neural network parameterization its just a linear map at the end of the day since the reparameterization is linear this
makes the overall idea very similar to a preconditioner this should be touched on and compared to eg kfac shampoo and the many other linear preconditioners that people use (a toy illustration of this equivalence is given after the summary below) as with the optimization comment above i think this work needs to be grounded more in the literature gcns are tangentially relevant but dont seem to be used in any really meaningful way technical comment right after eqn 15 it says that h is positive semidefinite where does this come from isnt the base problem meant to be nonconvex in which case by definition h should have some negative eigenvalues at some point this paper is clearly unready for publication the main idea using a structured linear reparameterization is underdeveloped and the experimental results are on problems that the iclr audience doesnt really care about docsepthe authors derive a neural reparameterization of nonconvex optimization problems in order to accelerate their convergence they do this by deriving how the slowest components of the optimization variables can have their convergence rate improved by preconditioning with an ntkbased matrix they make connections between this approach and graph convolutional networks experimentally they show this approach improves upon baseline gradientbased optimization on two datasets main comments overall i think the paper is quite novel and the experiments fairly convincing i really enjoy how much the authors walk through the individual steps of the gradient math which derives their neural reparameterization in 21 and 22 it is easy to follow and clear however one drawback of this approach is that it seems to only help the early stages of optimization as this is how it is used in the experiments i think the authors should take more care to make this point more clear in particular what prevents one from using this hessian approximation for bar m as in section 23 in early stages of training when using adam it would be nice to see an ablation of the different components of their method to understand exactly what component of the approach is contributing to the improved performance how does this approach compare to gradientbased optimization in terms of memory consumption how would this scale to largescale datasets with larger parameter spaces eg deep network training minor points the authors seem to pose the title and introduction to refer to any nonconvex optimization problem but in some parts of the paper they seem only focused on neural network optimization eg fig 1 it would be good to smooth out these inconsistencies the abstract on openreview and the abstract in the article do not match in the experiments why is the term linear used to refer to the gradientbased baselines i am not sure this is the best term to use and it was confusing to me upon my first read overall i lean slightly towards acceptance this is due to the clarity and novelty of the paper as well as the encouraging experimental results however i think some more experimental verification is needed for ablating the different components of the proposed approach and for demonstrating its applicability to a broader range of problems docsepthis work proposed a neural reparametrization scheme to accelerate a large class of nonconvex nonlinear optimization problems the proposed method is grounded on the analysis that the dynamics of the gradient flow are related to the condition number of the system more specifically by reparametrizing the optimization problem with a graph convolutional network gcn the proposed method can modify the condition number and obtain
a convergence speedup the acceleration is demonstrated on optimizing synchronization problems and the persistent homology of pointclouds the paper introduces a new network reparametrization method for accelerating optimization for nonlinear problems overall the reviewer finds the paper a bit hard to follow and the presentation of the paper can be significantly improved the experiments are interesting but the comparison is not quite comprehensive first the reviewer is not fully convinced by the benefits of reparametrization the reparameterization using a neural network can improve convergence speed but on the other hand the memory cost could be higher second it is a bit unclear to the reviewer why the authors considered the ntk in section 22 the ntk requires an ultrawide network in which the weights barely change so its use here is a bit abrupt without much explanation of the motivations behind it third the speedup in figure 2 does not seem impressive the authors only compared with a very basic baseline optimizer more comprehensive comparisons are needed to draw the conclusion ### Summary:
this paper proposes speeding up certain optimization problems common in physics by reparameterizing their parameters as the output of a graph neural network the reviewers appreciate the idea but are not convinced enough to recommend the paper for acceptance they point out the following weaknesses the method amounts to linear preconditioning and hence its reasonable to expect a fairly complete comparison to the many linear preconditioning approaches that have been proposed previously the reviewers are not satisfied with the currently provided comparison the main idea is not presented clearly enough in particular its not obvious the proposed method is best described as neural reparameterization since it seems to amount to linear preconditioning the experiments are not persuasive enough the presented problems may not be relevant to all of the target audience of iclr and the experimental evaluation does not seem sufficiently exhaustive the suggested areas of improvement provided by the reviewers seem reasonable to me i therefore recommend not accepting the paper in its current form to make the paper more accessible and appealing the authors may consider rewriting the paper to more closely match the perspective taken by the reviewers and to provide a more thorough comparison to the previous approaches and the existing literature
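several of the reviews above argue that a linear reparametrization w = kappa theta is effectively a preconditioner the toy numpy illustration below makes that point concrete on a simple quadratic objective gradient descent on theta with w = k theta follows exactly the same trajectory as gradient descent on w preconditioned by k k^T and choosing k as an inverse square root of the hessian equalizes the convergence of the slow and fast modes the quadratic loss and the specific choice of k are assumptions for illustration and are not taken from the paper under review

```python
import numpy as np

rng = np.random.default_rng(0)

# ill-conditioned quadratic: L(w) = 0.5 * w^T H w, condition number 100
H = np.diag([100.0, 1.0])
w0 = rng.normal(size=2)

def gd_on_w(P, lr, steps):
    # preconditioned GD on w:  w <- w - lr * P @ grad L(w)
    w = w0.copy()
    for _ in range(steps):
        w = w - lr * P @ (H @ w)
    return w

def gd_on_theta(K, lr, steps):
    # GD on theta with w = K @ theta; grad_theta L(K theta) = K^T grad_w L(w)
    theta = np.linalg.solve(K, w0)
    for _ in range(steps):
        theta = theta - lr * K.T @ (H @ (K @ theta))
    return K @ theta

K = np.diag(np.diag(H) ** -0.5)      # inverse square root of the (diagonal) Hessian

# 1) the two views coincide step for step, with preconditioner P = K K^T = H^{-1}
print(np.allclose(gd_on_w(K @ K.T, lr=1e-2, steps=200),
                  gd_on_theta(K, lr=1e-2, steps=200)))        # True

# 2) in the reparametrized dynamics all modes share one rate, so a large step
# converges in a few steps; plain GD on w must keep lr < 2 / lambda_max = 0.02,
# leaving the slow mode to decay at roughly 0.98 per step
print(np.linalg.norm(gd_on_theta(K, lr=0.9, steps=5)))        # ~ 0
print(np.linalg.norm(gd_on_w(np.eye(2), lr=0.019, steps=5)))  # still far from 0
```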